{
"paper_id": "I11-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:31:47.895752Z"
},
"title": "Crawling Back and Forth: Using Back and Out Links to Locate Bilingual Sites",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Barbosa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs -Research",
"location": {
"addrLine": "180 Park Ave Florham Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": "lbarbosa@research.att.com"
},
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Labs -Research",
"institution": "",
"location": {
"addrLine": "180 Park Ave Florham Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": ""
},
{
"first": "Vivek",
"middle": [
"Kumar"
],
"last": "Rangarajan Sridhar",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Labs - Research",
"institution": "",
"location": {
"addrLine": "180 Park Ave, Florham Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": "vkumar@research.att.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a novel crawling strategy to locate bilingual sites. It does so by focusing on the Web graph neighborhood of these sites and exploring the patterns of the links in this region to guide its visitation policy. A sub-task in the problem of bilingual site discovery is the job of detecting bilingual sites, i.e., given a Web site, verify whether it is bilingual or not. We perform this task by combining supervised learning and language identification. Experimental results demonstrate that our crawler outperforms previous crawling approaches and produces a high-quality collection of bilingual sites, which we evaluate in the context of machine translation in the tourism and hospitality domain. The parallel text obtained using our novel crawling strategy results in a relative improvement of 22% in BLEU score (English-to-Spanish) over an out-ofdomain seed translation model trained on the European parliamentary proceedings.",
"pdf_parse": {
"paper_id": "I11-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a novel crawling strategy to locate bilingual sites. It does so by focusing on the Web graph neighborhood of these sites and exploring the patterns of the links in this region to guide its visitation policy. A sub-task in the problem of bilingual site discovery is the job of detecting bilingual sites, i.e., given a Web site, verify whether it is bilingual or not. We perform this task by combining supervised learning and language identification. Experimental results demonstrate that our crawler outperforms previous crawling approaches and produces a high-quality collection of bilingual sites, which we evaluate in the context of machine translation in the tourism and hospitality domain. The parallel text obtained using our novel crawling strategy results in a relative improvement of 22% in BLEU score (English-to-Spanish) over an out-ofdomain seed translation model trained on the European parliamentary proceedings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Parallel texts are translations of the same text in different languages. Parallel text acquisition from the Web has received increased attention in the recent years, especially for machine translation (Melamed, 2001 ) and cross-language information retrieval (Grossman and Frieder, 2004) . For many years, the European Parliament proceedings (Koehn, 2005a) and official documents of countries with multiple languages were the only widely available parallel texts. Although these are high-quality corpora, they have some limitations:",
"cite_spans": [
{
"start": 201,
"end": 215,
"text": "(Melamed, 2001",
"ref_id": "BIBREF19"
},
{
"start": 259,
"end": 287,
"text": "(Grossman and Frieder, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 342,
"end": 356,
"text": "(Koehn, 2005a)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) they tend to be domain specific (e.g., government related texts); (2) they are available in only a few languages; and (3) sometimes they are not free or there is some restriction for using them. On the other hand, Web data is free and comprises data from different languages and domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous research in the area of parallel Web data acquisition has mainly focused on the problems of document pair identification (Jiang et al., 2009; Uszkoreit et al., 2010; Munteanu and Marcu, 2005; Resnik and Smith, 2003; Melamed, 2001 ) and sentence alignment. Typically, document pairs are located by issuing queries to a search engine (Resnik and Smith, 2003; Hong et al., 2010) . The sentences in the matched documents are then aligned using standard dynamic programming techniques. In this work, we model the problem of obtaining parallel text in two subtasks. First, locate the sites that contain bilingual data (bilingual sites). Here we assume that parallel texts are present in the same site (Chen and Nie, 2000) . Second, extract parallel texts within these sites. While the latter problem of extracting of parallel text from bilingual Web sites has received a lot of attention, the former problem of automatically locating high quality parallel Web pages is still an open problem.",
"cite_spans": [
{
"start": 130,
"end": 150,
"text": "(Jiang et al., 2009;",
"ref_id": "BIBREF12"
},
{
"start": 151,
"end": 174,
"text": "Uszkoreit et al., 2010;",
"ref_id": "BIBREF30"
},
{
"start": 175,
"end": 200,
"text": "Munteanu and Marcu, 2005;",
"ref_id": "BIBREF20"
},
{
"start": 201,
"end": 224,
"text": "Resnik and Smith, 2003;",
"ref_id": "BIBREF26"
},
{
"start": 225,
"end": 238,
"text": "Melamed, 2001",
"ref_id": "BIBREF19"
},
{
"start": 341,
"end": 365,
"text": "(Resnik and Smith, 2003;",
"ref_id": "BIBREF26"
},
{
"start": 366,
"end": 384,
"text": "Hong et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 704,
"end": 724,
"text": "(Chen and Nie, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a crawling strategy (Olston and Najork, 2010) to discover bilingual sites on the Web. Previous work on focused crawlers (Chakrabarti et al., 1999; Diligenti et al., 2000) has been used to locate different kinds of Web sources such as Web pages in a topic (Chakrabarti et al., 2002) , geographic information (Ahlers and Boll, 2009) and Web forms (Barbosa and Freire, 2007) by following outlinks. In contrast to these approaches, we explore the idea of using not only forward links but also backlinks. Backlinks of a page p are the links that point to p and outlinks (forward links) are the links that p points to. The reason for that is a single backlink page sometimes refers to many related pages, a phenomenon known as co-citation. Kumar et al. (Kumar et al., 1999) showed that co- citation is a common feature of Web communities and, as a result of that, Web communities are characterized by directed bipartite subgraphs. Based on that, we implemented our crawling strategy by restricting the crawler's search for bilingual sites in the bipartite graph composed by backlink pages (BPs) of the bilingual sites that were already discovered by the crawler, and pages pointed by BPs. This scheme is illustrated in Figure 1 . Our assumption, therefore, is that the Web region represented by this bipartite graph is rich in bilingual sites since backlink pages typically point to multiple bilingual sites (co-citation). Finally, to focus on the most promising regions in this graph, the crawler explores the patterns in the links to guide its visitation policy.",
"cite_spans": [
{
"start": 46,
"end": 71,
"text": "(Olston and Najork, 2010)",
"ref_id": null
},
{
"start": 146,
"end": 172,
"text": "(Chakrabarti et al., 1999;",
"ref_id": "BIBREF4"
},
{
"start": 173,
"end": 196,
"text": "Diligenti et al., 2000)",
"ref_id": "BIBREF8"
},
{
"start": 281,
"end": 307,
"text": "(Chakrabarti et al., 2002)",
"ref_id": "BIBREF5"
},
{
"start": 333,
"end": 356,
"text": "(Ahlers and Boll, 2009)",
"ref_id": "BIBREF0"
},
{
"start": 371,
"end": 397,
"text": "(Barbosa and Freire, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 760,
"end": 793,
"text": "Kumar et al. (Kumar et al., 1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1239,
"end": 1247,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A sub-task in the problem of bilingual site discovery is the job of detecting bilingual sites, i.e., given a Web site, verify whether it is bilingual or not. A simple approach to this task is to search the entire Web site for parallel text. However, this is computationally expensive since Web sites might contain hundreds/thousands of pages. We propose a low-cost strategy that visits very few pages in the Web site to make its prediction regarding the presence of bilingual text. Given a Web site's page, we use supervised learning to identify links on the page that are good candidates to point to parallel text within the site. Subsequently, our strategy verifies whether the pages pointed by the candidate links are in fact in the languages of interest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A new focused crawling strategy that explores the concept of co-citation by restricting the search for targeted sources (bilingual sites in this paper) in the bipartite graph com-posed by the backlink pages of the targeted sources already discovered by the crawler, and the forward links pointed to by the backlink pages. The crawler uses link classifiers specialized in each set of the URLs of the pages (backward and forward pages) of the bipartite graph to focus on the most promising regions in this graph;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A high-precision and efficient approach to detecting bilingual sites based on supervised learning and language identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organized as follows. In Section 2, we present our approach to locating and detecting a bilingual site. We present experimental results in Section 3 and demonstrate the efficacy of our approach in the context of machine translation in Section 4. We review related work in Section 5 and conclude in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A naive approach to collect parallel text would be to check for every pair of pages on the Web. However, this is computationally prohibitive given the scale of the Web. To make the search for parallel text more feasible, previous approaches made the assumption that parallel texts mainly occur within Web sites (Chen and Nie, 2000) . Thus, the search for parallel text can be comprised of two steps. First, locate bilingual sites, and then, extract the parallel text from them. While previous approaches (Resnik and Smith, 2003; Zhang et al., 2006) have mainly focused on the latter problem of extracting sentence aligned parallel text from web, we are interested in the former problem of locating such sites.",
"cite_spans": [
{
"start": 311,
"end": 331,
"text": "(Chen and Nie, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 504,
"end": 528,
"text": "(Resnik and Smith, 2003;",
"ref_id": "BIBREF26"
},
{
"start": 529,
"end": 548,
"text": "Zhang et al., 2006)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Site Crawler",
"sec_num": "2"
},
{
"text": "The architecture of our crawler is presented in Figure 2 . The crawler downloads a page, p and sends it to the bilingual site detector (BS Detector). If the BS Detector predicts that the site represented by p contains parallel text (see Section 2.1), the Backlink Crawler collects the backlinks of p, i.e., links that point to p, by using a search engine backlink API. The Backlink Classifier predicts the relevance of these links (see Section 2.3), and adds them to the queue that represent these links in the Frontier (backlink queue). The most promising backlink is then sent by the Frontier Scheduler to the Crawler, which downloads its content. Next, the Page Parser extracts the forward links of the backlink page and adds the most promising forward links (as identified by Forward-Link Classifier) to the forward-link queue. The Frontier Scheduler then decides the next link to be sent to the crawler. We present the core elements of the crawler in the sections below.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Bilingual Site Crawler",
"sec_num": "2"
},
{
"text": "The performance of the bilingual site detection is essential to obtain a high-quality collection of bilingual sites. Zhang et al. (Zhang et al., 2006) perform this task by extracting the anchor text and image alt text from pages in the Web sites and match them with a pre-defined list of strings in the languages of interest. If the Web site contains at least two matched links in the different languages then it is considered as bilingual. The approach suffers from the drawback of low recall since bilingual sites that contain patterns outside the list may be missed. Another approach (Ma and Liberman, 1999) verifies the presence of bilingual text at pages of the top 3 or 4 levels of the Web site by using a language identifier. This approach can be very expensive as one might need to download a considerable portion of the Web site to make a decision.",
"cite_spans": [
{
"start": 117,
"end": 150,
"text": "Zhang et al. (Zhang et al., 2006)",
"ref_id": "BIBREF31"
},
{
"start": 587,
"end": 610,
"text": "(Ma and Liberman, 1999)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Site Detection",
"sec_num": "2.1"
},
{
"text": "Our solution to detecting parallel sites has some similarities with these previous approaches but tries to address their main limitations. First, instead of using a pre-defined list of patterns to detect bilingual sites, we use supervised learning to predict if a given page has links to parallel data (Link Predictor). Second, to avoid downloading a great portion of the Web site, the BS Detector only verifies whether the pages whose URLs are considered relevant by the Link Predictor are in different languages. Link-Based Prediction. The role of the Link Predictor is to identify links that point to parallel text in a Web site. Our assumption is that pages of bilingual sites contain some common link patterns. For instance, pages in English might have a link to its version in Spanish, containing words such as \"espanol\" and \"castellano\" in its anchor, URL, etc. However, there are cases whereby the link does not provide any visible textual information to the user. Instead, only an image (usually a country flag) might represent the link. In these cases, textual information in the fields of the img html tag (e.g. alt and src) might be helpful. In order to be able to handle different types of patterns in the links, the Link Predictor uses features in 5 different contexts: tokens in the URL, anchor, around, image alt and image src. For this paper, we built the training data from non-bilingual and bilingual sites in English/Spanish. It was compiled by manually labeling 560 URLs (236 relevant and 324 non-relevant). We use probabilistic SVM (Platt, 1999) as the learning algorithm to create the Link Predictor. Probabilistic SVM is a suitable choice for this classification as it performs well on text data, and we are also interested in the class likelihood of the instances.",
"cite_spans": [
{
"start": 1554,
"end": 1567,
"text": "(Platt, 1999)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Site Detection",
"sec_num": "2.1"
},
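The following is a minimal sketch of how such a Link Predictor could be implemented with scikit-learn: tokens from the five link contexts are prefixed with their context name and fed to a linear SVM with Platt-scaled probabilities. The helper function, context names and toy examples are illustrative assumptions, not the authors' code.

```python
# Sketch of a Link Predictor (assumptions: scikit-learn is available and
# each link's contexts arrive as a dict; the toy data is illustrative).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

CONTEXTS = ("url", "anchor", "around", "img_alt", "img_src")

def link_to_feature_string(link):
    """Prefix each token with the context it came from."""
    return " ".join(f"{ctx}:{tok}"
                    for ctx in CONTEXTS
                    for tok in link.get(ctx, "").lower().split())

# Toy stand-ins for the 560 manually labeled URLs (236 relevant, 324 not).
train_links = [
    {"url": "/es/index.html", "anchor": "espanol"},
    {"url": "/index_sp.html", "img_alt": "castellano"},
    {"url": "/contact.html", "anchor": "contact us"},
    {"url": "/news/2011", "anchor": "latest news"},
]
train_labels = [1, 1, 0, 0]

link_predictor = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),
    SVC(kernel="linear", probability=True),  # probabilistic SVM (Platt, 1999)
)
link_predictor.fit([link_to_feature_string(l) for l in train_links],
                   train_labels)

# The class likelihood is later thresholded by the BS Detector (e.g., 0.8).
candidate = {"url": "/en/home", "img_alt": "bandera castellano"}
print(link_predictor.predict_proba([link_to_feature_string(candidate)]))
```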
{
"text": "In essence, the Link Predictor works as a lowcost filter, its cost is associated with the link classifications which is very low. It also considerably prunes the search space for subsequent steps that are typically more expensive computationally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Site Detection",
"sec_num": "2.1"
},
{
"text": "Language Identification. In the second step of the bilingual site detection, the BS Detector verifies if the pages whose links were considered relevant by the Link Predictor are in the languages of interest. The motivation behind the use of language identification for our problem is, since we are interested in bilingual text, only looking at individual links of these sites might not suffice. In addition to identify the language of the pages of candidate links identified by the Link Predictor, language identification is also performed on the page that contains such links, i.e., the page that was provided as input to the BS Detector. This handles cases in which a page in a given language only contains a link to its translation in other language but not links to both versions. The language identification is then performed in all pages of that candidate list and if different pages are in the language of interest, the site is considered as bilingual. To detect the language of a given page, we use the textcat (Cavnar and Trenkle, 1994) Language Identifier.",
"cite_spans": [
{
"start": 1019,
"end": 1045,
"text": "(Cavnar and Trenkle, 1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Site Detection",
"sec_num": "2.1"
},
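A sketch of this verification step is shown below, under the assumption that pages are fetched over HTTP; detect_language() is a toy function-word heuristic standing in for textcat or any n-gram identifier.

```python
# Sketch of the BS Detector's second step: fetch the input page plus the
# candidate pages and check that both languages of interest appear.
# detect_language() is a toy stand-in for textcat (Cavnar and Trenkle, 1994).
import urllib.request

def detect_language(text):
    text = f" {text.lower()} "
    en = sum(text.count(w) for w in (" the ", " and ", " of "))
    es = sum(text.count(w) for w in (" el ", " la ", " de ", " y "))
    return "en" if en >= es else "es"

def is_bilingual(page_url, candidate_urls, langs=("en", "es")):
    found = set()
    for url in [page_url, *candidate_urls]:
        try:
            html = urllib.request.urlopen(url, timeout=10).read()
        except OSError:
            continue
        found.add(detect_language(html.decode("utf-8", errors="replace")))
        if found >= set(langs):  # both languages seen: declare bilingual
            return True
    return False
```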
{
"text": "Even though there is some cost in downloading the pages to perform this step, we show later in this section that it is only necessary to download Evaluation. To measure the quality of the BS Detector, we manually labeled 200 Web sites (100 positive and 100 negative) from the dmoz directory in topics related to Spanish speaking countries. A site was considered as relevant if it contained at least a pair of parallel pages. Our approach is similar to that employed by (Resnik and Smith, 2003) to label parallel text.",
"cite_spans": [
{
"start": 469,
"end": 493,
"text": "(Resnik and Smith, 2003)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Site Detection",
"sec_num": "2.1"
},
{
"text": "Since the Link Predictor outputs the likelihood of a relevant link, we varied the minimum likelihood for a link be considered as relevant. For each value, we measured its quality (precision, recall and F-measure), as well as its cost (number of downloaded pages per site in the language identification step). Table 1 presents the results for the BS Detector and the Link Predictor (first step of the BS Detector). When the minimum likelihood is 0, the language identification process checks all the links in the given pages for pairs of languages, i.e., the Link Predictor considers all the links as relevant. In this scenario, an average of 29 pages per Web site were downloaded and the recall of the BS Detector was 0.86. This implies that the language identifier was not able to detect pairs of languages in 16% of the relevant sites. As expected, the minimum likelihood is directly proportional to the precision and inversely proportional to the recall. It is interesting to note that between 0.5 and 0.8, these values do not change for the BS Detector besides the decreasing of cost. The Link Predictor shows a similar behavior. Another important observation to glean is that adding the language detection on top of the Link Predictor improves the overall precision of the bilingual site detection. For instance, when the minimum likelihood is set to 0.5, the Link Predictor's precision is 0.75 whereas that of the BS Detector is 0.95. The high precision of the BS Detector is very important to build a high-quality set of bilingual sites.",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Bilingual Site Detection",
"sec_num": "2.1"
},
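The threshold sweep behind Table 1 amounts to a few lines of evaluation code; in the sketch below, the labels and likelihoods are toy stand-ins for the 200 hand-labeled sites.

```python
# Sketch of the threshold sweep: vary the minimum likelihood for a site to
# count as bilingual and report precision/recall/F-measure at each setting.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([1, 1, 1, 0, 0, 0])               # site truly bilingual?
y_prob = np.array([0.9, 0.7, 0.4, 0.6, 0.2, 0.1])   # detector likelihoods

for threshold in (0.0, 0.5, 0.8):
    y_pred = (y_prob >= threshold).astype(int)
    p, r, f, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", zero_division=0
    )
    print(f"min likelihood {threshold:.1f}: P={p:.2f} R={r:.2f} F={f:.2f}")
```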
{
"text": "In this section, we focus our attention to our solution to locating bilingual sites on the Web. Previous work (Ma and Liberman, 1999 ) tries to perform this task by restricting the crawler in a top-level internet domain where it is supposed to contain a high concentration of these sites. For instance, Ma and Liberman (Ma and Liberman, 1999) focused the crawler in .de domain since they were interested in German/English language pairs. In this work, we do not restrict the crawler to any particular internet domain or topic. Our objective is to allow the crawler to perform a broad search while avoiding visits to unproductive Web regions.",
"cite_spans": [
{
"start": 110,
"end": 132,
"text": "(Ma and Liberman, 1999",
"ref_id": "BIBREF18"
},
{
"start": 319,
"end": 342,
"text": "(Ma and Liberman, 1999)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Crawling Policy",
"sec_num": "2.2"
},
{
"text": "We implemented this strategy by imposing the constraint that the crawler stays in the Web neighborhood graph of the bilingual sites that were previously discovered by the crawler. More specifically, the crawler explores the neighborhood graph defined by the bipartite graph composed by the backlink pages (BPs) of bilingual sites and the pages pointed by BPs (forward pages), see Figure 1. As we mentioned before, this strategy is based on the findings that Web communities are characterized by directed bipartite subgraphs (Kumar et al., 1999) . Thus, our assumption is that the Web region comprised by this bipartite graph is rich in bilingual sites as backlink pages typically point to multiple bilingual sites. Finally, as we are looking for Web sites and not for single Web pages, the crawler only considers out-of-site links, i.e., it excludes from the bipartite graph links to internal pages of the sites.",
"cite_spans": [
{
"start": 524,
"end": 544,
"text": "(Kumar et al., 1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 380,
"end": 386,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Crawling Policy",
"sec_num": "2.2"
},
{
"text": "The steps of our algorithm are shown in Algorithm 1. Initially, the user provides a set of seed URLs that are added to the frontier. The crawler then starts to download the links in the frontier. If the BS Detector identifies a page in a bilingual site, the backlinks to this page are collected and added back to the frontier. Backlink information can be retrieved through the \"link:\" API provided by search engines such as Google and Yahoo! (Bharat et al., 1998) . In the next step, pages represented by the backlinks (backlink pages) are downloaded, their outlinks are extracted and added to the frontier. Notice that only the outlinks from the backlink pages are added to the frontier. The crawler does not explore outlinks of forward pages (forward pages are pages pointed by backlink pages, see Figure 1 ).",
"cite_spans": [
{
"start": 442,
"end": 463,
"text": "(Bharat et al., 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 800,
"end": 808,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Crawling Policy",
"sec_num": "2.2"
},
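A compact sketch of this back-and-forth loop follows. All component names are illustrative assumptions: fetch() downloads a page, bs_detector() is the detector from Section 2.1, get_backlinks() abstracts the search engine "link:" API, the two *_clf functions score URLs as in Section 2.3, and site_of() is a crude host extractor.

```python
# Sketch of Algorithm 1: alternate between harvesting backlinks of detected
# bilingual sites and following out-of-site forward links of backlink pages.
import heapq

def site_of(url):
    """Crude host extraction, enough to tell out-of-site links apart."""
    return url.split("/")[2] if "//" in url else url

def crawl(seeds, fetch, bs_detector, get_backlinks, extract_outlinks,
          backlink_clf, forward_clf, budget=100_000):
    # Negated scores turn heapq's min-heap into a max-priority frontier.
    frontier = [(-1.0, url, "forward") for url in seeds]
    heapq.heapify(frontier)
    bilingual_sites = set()
    while frontier and budget > 0:
        _, url, queue = heapq.heappop(frontier)
        page = fetch(url)
        budget -= 1
        if queue == "forward" and bs_detector(page):
            bilingual_sites.add(site_of(url))
            for b in get_backlinks(url):              # delayed benefit
                heapq.heappush(frontier, (-backlink_clf(b), b, "backlink"))
        elif queue == "backlink":
            for f in extract_outlinks(page):
                if site_of(f) != site_of(url):        # out-of-site links only
                    heapq.heappush(frontier, (-forward_clf(f), f, "forward"))
    return bilingual_sites
```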
{
"text": "Retaining the crawler in the graph neighborhood of bilingual sites (the bipartite graph) is our first attempt towards an effective search for such sites. However, there may be many links in the graph that do not lead to relevant sites. In order to identify promising URLs in the two different page sets of the bipartite graph, we employ supervised learning. For each set (backlink and forward sets), the crawler builds a classifier that outputs the relevance of a given link in that particular set. Relevant links in the forward pages' set represent URLs of bilingual sites, i.e., links that give immediate benefit, whereas relevant links in the backlink pages' set are URLs of backlink pages that contain outlinks to bilingual sites (delayed benefit).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward-Link and Backlink Classifiers",
"sec_num": "2.3"
},
{
"text": "Previous approaches for focused crawling (Chakrabarti et al., 2002; Rennie and McCallum, 1999; Barbosa and Freire, 2007) also use patterns on links to prioritize them. But instead of using link classifiers specialized in different link layers, they build a single classifier. The advantage of having multiple classifiers is that it decomposes a complex problem into simpler subproblems in which each classifier is dedicated to a subset of more homogeneous hypothesis (Gangaputra and Geman, 2006) . Diligenti et al. (Diligenti et al., 2000) also proposed the use of multiple classifiers to guide the crawler. But instead of looking at link patterns, they use the content of the pages.",
"cite_spans": [
{
"start": 41,
"end": 67,
"text": "(Chakrabarti et al., 2002;",
"ref_id": "BIBREF5"
},
{
"start": 68,
"end": 94,
"text": "Rennie and McCallum, 1999;",
"ref_id": "BIBREF25"
},
{
"start": 95,
"end": 120,
"text": "Barbosa and Freire, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 467,
"end": 495,
"text": "(Gangaputra and Geman, 2006)",
"ref_id": "BIBREF9"
},
{
"start": 515,
"end": 539,
"text": "(Diligenti et al., 2000)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forward-Link and Backlink Classifiers",
"sec_num": "2.3"
},
{
"text": "In summary, the Forward-Link Classifier predicts the most promising links for the forward pages, whereas the Backlink Classifier identifies the most promising links for the backlink pages. Both classifiers use as features the neighborhood of links. The link neighborhood is composed by four contextual categories: URL (without the host), host, anchor, and text around the link. Since the number of extracted features tends to be large (and most of them have very low frequency), we remove stop-words and stem the remaining words. Note that features are associated with a context. For example, if the word \"hotel\" appears both in the URL and anchor text of a link, it is added as a feature in both contexts. It is important to note that words in the host context have an important role, since many parallel corpus sites are in a country's internet domain, e.g., es, de, etc. In fact, as we mentioned before, some previous approaches (Ma and Liberman, 1999) restrict the crawl within these domains to collect parallel data. But instead of pre-defining a set of domains, the crawler in our work automatically identifies the most important ones during its crawling process.",
"cite_spans": [
{
"start": 932,
"end": 955,
"text": "(Ma and Liberman, 1999)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forward-Link and Backlink Classifiers",
"sec_num": "2.3"
},
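A sketch of this four-context featurization is given below, assuming NLTK's stop-word list and Porter stemmer as stand-ins for the unspecified stop-word removal and stemming.

```python
# Sketch of the link-neighborhood featurization: four contexts (host, URL
# without host, anchor, around-text), stop-word removal and stemming, with
# each surviving word tagged by its context. Assumes the NLTK data has been
# fetched once via nltk.download("stopwords").
from urllib.parse import urlparse
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

STOP = set(stopwords.words("english"))
STEMMER = PorterStemmer()

def link_features(url, anchor, around_text):
    parsed = urlparse(url)
    contexts = {
        "host": parsed.netloc.replace(".", " "),  # keeps "es", "de" visible
        "url": parsed.path.replace("/", " "),     # URL without the host
        "anchor": anchor,
        "around": around_text,
    }
    return [f"{name}:{STEMMER.stem(word)}"
            for name, text in contexts.items()
            for word in text.lower().split()
            if word not in STOP]

print(link_features("http://www.hotel.es/en/rooms", "hotel rooms",
                    "book a hotel in spain"))
```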
{
"text": "As one can expect, the two classifiers perform a different role. For the Backlink Classifier, features such as \"link\" and \"directory\" in the URL obtained have high frequency in the training data. These words usually occur in the URL of pages that point to many different sites, e.g., http://www.rentaccomspain.com/links.asp. The Forward-Link Classifier is more focused on topics. Words such as \"hotel\", \"air\", \"art\" and \"language\" were some of the frequent features used by it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward-Link and Backlink Classifiers",
"sec_num": "2.3"
},
{
"text": "The two classifiers are automatically created during the crawling process. Initially, the crawler starts with no link prioritization. After a specified number of crawled pages, a learning iteration is performed by collecting the link neighborhood of the links that point to relevant and non-relevant pages in each set. The result of this process is used as training data for the Backlink and Forward-Link classifiers. Similar to previous focused crawling approaches (Chakrabarti et al., 2002; Barbosa and Freire, 2007) , we use Naive Bayes algorithm for this purpose. As a final step, the relevance of the links in the frontier are updated based on the new classifiers.",
"cite_spans": [
{
"start": 466,
"end": 492,
"text": "(Chakrabarti et al., 2002;",
"ref_id": "BIBREF5"
},
{
"start": 493,
"end": 518,
"text": "Barbosa and Freire, 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forward-Link and Backlink Classifiers",
"sec_num": "2.3"
},
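A sketch of one such learning iteration follows, assuming link neighborhoods have already been serialized to context-prefixed feature strings as above; the Naive Bayes choice follows the paper, everything else (names, data layout) is illustrative.

```python
# Sketch of a learning iteration: retrain a Naive Bayes link classifier from
# (feature_string, label) pairs gathered since the last iteration, then
# re-score the links still pending in the frontier with the new model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def learning_iteration(examples, pending_links):
    """examples: (feature_string, 0/1 label) pairs for one link set;
    pending_links: (feature_string, url) pairs waiting in the frontier."""
    clf = make_pipeline(CountVectorizer(token_pattern=r"\S+"),
                        MultinomialNB())
    clf.fit([x for x, _ in examples], [y for _, y in examples])
    # Classes are sorted, so column 1 is P(relevant) for 0/1 labels.
    rescored = sorted(((clf.predict_proba([feats])[0][1], url)
                       for feats, url in pending_links), reverse=True)
    return clf, rescored
```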
{
"text": "In this section, we assess our crawling strategy to locate bilingual sites and compare it with other crawling approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crawling Experiments",
"sec_num": "3"
},
{
"text": "Crawling Strategies. We executed the following crawling strategies to locate bilingual sites:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 Forward Crawler (FC): The forward crawler randomly follows the forward links without any restriction;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 Focused Crawler (FocC): although our strategy does not restrict its search to a particular domain, we set up a focused crawler (Chakrabarti et al., 2002) in the travel domain for comparison. The focused crawler is composed by a page classifier that restricts the crawl to pages in the travel domain and a link classifier that guides the crawler's link visitation to avoid unproductive Web regions (see (Chakrabarti et al., 2002) for more details);",
"cite_spans": [
{
"start": 129,
"end": 155,
"text": "(Chakrabarti et al., 2002)",
"ref_id": "BIBREF5"
},
{
"start": 404,
"end": 430,
"text": "(Chakrabarti et al., 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 Out-of-site Back/Forward Crawler (OBFC): The out-of-site back/forward crawler uses the crawling strategy proposed in this paper without any prioritization to the links;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 Classifier-Based Out-of-site Back/Forward Crawler (COBFC): The classifier-based out-ofsite back/forward is the OBFC along with the Backlink and Forward-link classifiers to prioritize the links in the frontier. Both classifiers are created after crawling 20,000 pages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "We set up the crawlers to locate bilingual sites in English and Spanish. Each configuration collected 100,000 pages and 1,000 links were provided as seeds. These were randomly selected from the URLs available on the Open Directory Project 1 related to Spanish speaking countries. Effectiveness measure. The performance of the crawling strategies was measured by the total number of bilingual sites collected after the bilingual site detection during the crawl. The minimum likelihood used by BS Detector to consider a link as relevant was 0.8 since we are interested in obtain a high-quality collection of bilingual sites (see Section 2.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "In Figure 3 , we present the total of bilingual sites collected by each crawling configuration after collecting 100,000 pages. Our crawling strategy, COBFC, collected the greatest number of bilingual sites (6598 sites). This result empirically confirms that our approach of restricting the crawler to the neighborhood of bilingual sites by using back and forward links, along with classifiers that prioritize these links is in fact effective for locating bilingual sites. The comparison between the top two strategies, namely, COBFC (6598 bilingual sites) and OBFC (3894 bilingual sites) shows that: (1) the Backlink and Forward-link classifiers used to prioritize the links in the frontier improve the crawler's performance; and (2) even with no link prioritization, our strategy of restricting the search to the bipartite graph of backlink and forward pages is able to obtain good results. We can conclude from these numbers that bilingual sites are close to each other when one considers their backlinks. As we mentioned previously, this can be attributed to the fact that backlinks are typically hubs to bilingual sites.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Assessing the Bilingual Site Crawler",
"sec_num": "3.2"
},
{
"text": "From the experimental results, it is clear that our crawling is effective for locating bilingual sites on the Web. The main limitation, however, is that it relies on an external component (search engine) to provide backlinks. In the experiments presented in this work, the use of a search engine slowed down the crawling execution since we did not want to submit many requests to the search engine and consequently have the backlink requests halted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessing the Bilingual Site Crawler",
"sec_num": "3.2"
},
{
"text": "A final note regarding our crawling strategy is that even though we do not restrict it to any particular topic, as the crawling process evolves, it automatically focuses on topics where there is a higher concentration of parallel data, as travel, translator sites, etc. This is different from conventional approaches that explicitly constrain the crawl based on topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessing the Bilingual Site Crawler",
"sec_num": "3.2"
},
{
"text": "In this section, we exploit the parallel text obtained through our crawling strategy as augmented data in machine translation. We use a phrase-based statistical machine translation system (Koehn et al., 2007) in all the experiments.",
"cite_spans": [
{
"start": 188,
"end": 208,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Experiments",
"sec_num": "4"
},
{
"text": "Web data. We focus on English and Spanish as the bilingual pair of languages. We used the crawling strategy presented in the previous section to obtain a set of 20186 bilingual sites. The parallel text from these sites was mined using the technique presented in (Rangarajan et al., 2011) . A total of initial 4.84M bilingual sentence pairs were obtained from this process. We used length-based and word-based filters as well as a language model to filter these initial sentence pairs. After cleanup, a total of 2,039,272 bilingual sentence pairs was obtained from the crawling data. Development and Test Data. In order to obtain a representative reference development and test set, we manually created bilingual sentences in the hospitality and tourism domain. A bilingual speaker was given instructions to create dialogs in a variety of travel scenarios such as making a hotel reservation, booking a taxi, checking into a hotel, calling front desk and reporting problems, etc. A total of 49 scenarios were created that resulted in 1019 sentences, 472 of which was used for development and 547 for testing. The dialogs were created in English and then translated to Spanish. The development and test sets are not very large, mainly because creating high quality bilingual data for a particular domain is expensive. We have given due consideration to create a test set that is highly similar to the domain of operation. We are working on evaluating the performance of the crawled data on different domains as part of future work (as we translate more data through human annotations). MT Models. We performed machine translation experiments in both directions, English-Spanish and Spanish-English. Europarl data is the only source of parallel data (English/Spanish) that we have access to, and hence it serves as the data for our baseline translation model. Although the model can be considered to be out-of-domain with respect to our test domain, its language style is more similar to our test set (spoken dialogs) in comparison with the Web data. The Europarl data comprised 1.48M bilingual sentence pairs.",
"cite_spans": [
{
"start": 262,
"end": 287,
"text": "(Rangarajan et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "The Web data translation model was trained on the sentences resulting from the Web crawler. We also used a combination of the two models that we call as combined model. The combined model uses both the phrase tables during decoding. The reordering table was also concatenated from the two models. Table 2 presents the translation performance in terms of various metrics such as BLEU (Papineni et al., 2002) , METEOR (Lavie and Denkowski, 2010) and Translation Edit Rate (TER) (Snover et al., 2006 ). The language model was a 5 gram language model optimized on the development set based on perplexity and the translation weights of the log-linear model were learned using Minimum Error Rate Training.",
"cite_spans": [
{
"start": 383,
"end": 406,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF22"
},
{
"start": 416,
"end": 443,
"text": "(Lavie and Denkowski, 2010)",
"ref_id": "BIBREF17"
},
{
"start": 476,
"end": 496,
"text": "(Snover et al., 2006",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
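For reference, corpus-level BLEU as reported in Table 2 can be computed as in the sketch below; NLTK's corpus_bleu is used here merely as a stand-in for the standard evaluation script, and the sentence pair is invented.

```python
# Illustrative corpus-level BLEU computation; NLTK's corpus_bleu stands in
# for the standard multi-reference BLEU evaluation script.
from nltk.translate.bleu_score import SmoothingFunction, corpus_bleu

references = [[["the", "hotel", "room", "is", "ready"]]]  # refs per sentence
hypotheses = [["the", "hotel", "room", "is", "prepared"]]

bleu = corpus_bleu(references, hypotheses,
                   smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {100 * bleu:.2f}")
```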
{
"text": "While the out-of-domain model trained using Europarl data achieves a BLEU score of 20.65 on the test set (tourism and hospitality domain) for English-Spanish, the model constructed by augmenting the web crawling data to europarl data achieves a relative improvement of 22%. Similar improvements hold for Spanish-English translation. The METEOR scores reported in Table 2 were computed only for exact match (synonyms and stemmed matches were not considered). For all three objective metrics, we achieve significant improvements in translation performance. The results demonstrate the efficacy of our bilingual crawling approach for harvesting parallel text for machine translation. The bilingual crawler can be initialized with a different policy based on the test domain of interest and hence our scheme is generalizable.",
"cite_spans": [],
"ref_spans": [
{
"start": 363,
"end": 370,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Regarding the results of Europarl versus Web data alone, the reason for the lower translation ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "There are basically two main types of approaches to locate parallel corpora: query-based (Resnik and Smith, 2003; Resnik, 1998; Chen and Nie, 2000; Tom\u00e1s et al., 2005) and crawling-based (Ma and Liberman, 1999; Chen et al., 2004) . Query-based approaches typically try to explore common patterns that occur in this kind of data by using them as search queries. For instance, STRAND (Resnik and Smith, 2003; Resnik, 1998) tries to locate candidate parallel pages by issuing queries like: (anchor:\"english\" OR anchor:\"anglais\") AND (anchor:\"french\" OR anchor:\"francais\"). Chen and Nie (Chen and Nie, 2000) used a similar principle to obtain two sets of candidate sites by issuing queries as anchor:\"english version\" to a search engine, and then taking the union. More recently, Hong et al. (Hong et al., 2010) proposed a method that discovers document pairs by first selecting the top words in a source language document, translating these words and issuing them as a query to a search engine. The main limitation of these previous approaches is that they only rely on the search engine results to obtain the parallel pages. And, since search engines restrict the total number of results per query and the number of requests, there is a limitation in terms of the total number of sites that can be collected. This is confirmed by the numbers presented in their experimental evaluation. For instance, Chen and Nie (Chen and Nie, 2000) reported a total of only 185 candidate sites for English-Chinese corpora.",
"cite_spans": [
{
"start": 89,
"end": 113,
"text": "(Resnik and Smith, 2003;",
"ref_id": "BIBREF26"
},
{
"start": 114,
"end": 127,
"text": "Resnik, 1998;",
"ref_id": "BIBREF27"
},
{
"start": 128,
"end": 147,
"text": "Chen and Nie, 2000;",
"ref_id": "BIBREF6"
},
{
"start": 148,
"end": 167,
"text": "Tom\u00e1s et al., 2005)",
"ref_id": "BIBREF29"
},
{
"start": 187,
"end": 210,
"text": "(Ma and Liberman, 1999;",
"ref_id": "BIBREF18"
},
{
"start": 211,
"end": 229,
"text": "Chen et al., 2004)",
"ref_id": "BIBREF7"
},
{
"start": 382,
"end": 406,
"text": "(Resnik and Smith, 2003;",
"ref_id": "BIBREF26"
},
{
"start": 407,
"end": 420,
"text": "Resnik, 1998)",
"ref_id": "BIBREF27"
},
{
"start": 583,
"end": 603,
"text": "(Chen and Nie, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 788,
"end": 807,
"text": "(Hong et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 1411,
"end": 1431,
"text": "(Chen and Nie, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "With respect to crawling-based approaches for locating parallel text, there is not much prior work in this area. In fact, most of the research in this area is focused more on the problem of identifying the text pairs (Munteanu and Marcu, 2005; Zhang et al., 2006; Uszkoreit et al., 2010) than actually locating them. They typically use simple strategies to locate parallel text without exploring the Web link structure. For example, Ma and Liberman (Ma and Liberman, 1999) try to achieve this goal by simply restricting the crawler within in a particular internet domain whereby there might be a good chance of finding this kind of data.",
"cite_spans": [
{
"start": 217,
"end": 243,
"text": "(Munteanu and Marcu, 2005;",
"ref_id": "BIBREF20"
},
{
"start": 244,
"end": 263,
"text": "Zhang et al., 2006;",
"ref_id": "BIBREF31"
},
{
"start": 264,
"end": 287,
"text": "Uszkoreit et al., 2010)",
"ref_id": "BIBREF30"
},
{
"start": 449,
"end": 472,
"text": "(Ma and Liberman, 1999)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "This paper presents a novel focused crawling strategy to locate bilingual sites. It keeps its search in the bipartite graph composed by the backlink pages of bilingual sites (already discovered by the crawler) and the pages pointed by them. To focus on the most promising regions in this graph, the crawler explores the patterns presented in its links to guide its visitation policy. Another novelty proposed in this paper is our low-cost and high-precision strategy to detect a bilingual site. It performs this task in two steps. First, it relies on common patterns found in the internal links of these sites to compose a classifier that identifies link pages as entry points to parallel data in these sites. Second, it verifies whether these pages are in fact in the languages of interest. Our experiments showed that our crawling strategy is more effective in finding bilingual sites than the baseline approaches and that our bilingual site detection has high-precision while being efficient. We also demonstrated the efficacy of our crawling approach by performing machine translation experiments using the parallel text obtained from the bilingual sites identified by the crawler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "An interesting venue to pursue in a future work is to verify whether the crawling strategy proposed in this paper also works in other types of domains where regular focused crawling may have issues in finding the targeted Web sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adaptive geospatially focused crawling",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ahlers",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Boll",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceeding of the 18th ACM conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "445--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Ahlers and S. Boll. 2009. Adaptive geospatially focused crawling. In Proceeding of the 18th ACM conference on Information and knowledge management, pages 445-454.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An adaptive crawler for locating hidden-web entry points",
"authors": [
{
"first": "L",
"middle": [],
"last": "Barbosa",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Freire",
"suffix": ""
}
],
"year": 2007,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "441--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Barbosa and J. Freire. 2007. An adaptive crawler for lo- cating hidden-web entry points. In WWW, pages 441-450.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The connectivity server: Fast access to linkage information on the web. Computer Networks and ISDN Systems",
"authors": [
{
"first": "K",
"middle": [],
"last": "Bharat",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Broder",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Henzinger",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Venkatasubramanian",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "30",
"issue": "",
"pages": "469--477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Bharat, A. Broder, M. Henzinger, P. Kumar, and S. Venkatasubramanian. 1998. The connectivity server: Fast access to linkage information on the web. Computer Networks and ISDN Systems, 30(1-7):469-477.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "N-gram-based text categorization",
"authors": [
{
"first": "W",
"middle": [
"B"
],
"last": "Cavnar",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Trenkle",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "161--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W.B. Cavnar and J.M. Trenkle. 1994. N-gram-based text categorization. pages 161-175.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Focused crawling: A new approach to topic-specific web resource discovery",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dom",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer Networks",
"volume": "",
"issue": "",
"pages": "1623--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Chakrabarti, M. Berg, and B. Dom. 1999. Focused crawl- ing: A new approach to topic-specific web resource dis- covery. Computer Networks, 31(11-16):1623-1640.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Accelerated focused crawling through online relevance feedback",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Punera",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Subramanyam",
"suffix": ""
}
],
"year": 2002,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "148--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Chakrabarti, K. Punera, and M. Subramanyam. 2002. Ac- celerated focused crawling through online relevance feed- back. In WWW, pages 148-159.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parallel web text mining for cross-language IR",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [
"Y"
],
"last": "Nie",
"suffix": ""
}
],
"year": 2000,
"venue": "RIAO",
"volume": "1",
"issue": "",
"pages": "62--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chen and J.Y. Nie. 2000. Parallel web text mining for cross-language IR. In RIAO, volume 1, pages 62-78.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discovering parallel text from the World Wide Web",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Chau",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Yeh",
"suffix": ""
}
],
"year": 2004,
"venue": "workshop on Australasian information security, Data Mining and Web Intelligence, and Software Internationalisation",
"volume": "",
"issue": "",
"pages": "161--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chen, R. Chau, and C.H. Yeh. 2004. Discovering paral- lel text from the World Wide Web. In workshop on Aus- tralasian information security, Data Mining and Web In- telligence, and Software Internationalisation, pages 161- 165.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Focused Crawling Using Context Graphs",
"authors": [
{
"first": "M",
"middle": [],
"last": "Diligenti",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Coetzee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lee"
],
"last": "Giles",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gori",
"suffix": ""
}
],
"year": 2000,
"venue": "VLDB",
"volume": "",
"issue": "",
"pages": "527--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Diligenti, F. Coetzee, S. Lawrence, C. Lee Giles, and M. Gori. 2000. Focused Crawling Using Context Graphs. In VLDB, pages 527-534.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A design principle for coarse-to-fine classification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Gangaputra",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Geman",
"suffix": ""
}
],
"year": 2006,
"venue": "Computer Vision and Pattern Recognition",
"volume": "2",
"issue": "",
"pages": "1877--1884",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Gangaputra and D. Geman. 2006. A design principle for coarse-to-fine classification. In Computer Vision and Pat- tern Recognition, volume 2, pages 1877-1884.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Information retrieval: Algorithms and heuristics",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Grossman",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Frieder",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.A. Grossman and O. Frieder. 2004. Information retrieval: Algorithms and heuristics. Kluwer Academic Pub.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An empirical study on web mining of parallel data",
"authors": [
{
"first": "G",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Rim",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "474--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Hong, C. Li, M. Zhou, and H. Rim. 2010. An empirical study on web mining of parallel data. In Proceedings of the 23rd International Conference on Computational Lin- guistics, COLING '10, pages 474-482, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mining bilingual data from the web with adaptively learnt patterns",
"authors": [
{
"first": "L",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "870--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Jiang, S. Yang, M. Zhou, X. Liu, and Q. Zhu. 2009. Min- ing bilingual data from the web with adaptively learnt pat- terns. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL, pages 870-878.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Moses: open source toolkit for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Fed- erico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: open source toolkit for statistical machine transla- tion. In Proceedings of ACL, pages 177-180.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "MT summit",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn. 2005a. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "MT Summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn. 2005b. Europarl: A parallel corpus for statistical machine translation. In MT Summit.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Trawling the Web for emerging cyber-communities",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rajagopalan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tomkins",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer networks",
"volume": "",
"issue": "",
"pages": "1481--1493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Kumar, P. Raghavan, S. Rajagopalan, and A. Tomkins. 1999. Trawling the Web for emerging cyber-communities. Computer networks, 31(11-16):1481-1493.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The METEOR metric for automatic evaluation of machine translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Denkowski",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Lavie and M. Denkowski. 2010. The METEOR metric for automatic evaluation of machine translation. Machine Translation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bits: A method for bilingual text search over the web",
"authors": [
{
"first": "X",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Liberman",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Translation Summit VII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Ma and M. Liberman. 1999. Bits: A method for bilingual text search over the web. In Machine Translation Summit VII.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Empirical methods for exploiting parallel texts",
"authors": [
{
"first": "I",
"middle": [
"D"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I.D. Melamed. 2001. Empirical methods for exploiting par- allel texts. MIT Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving machine translation performance by exploiting non-parallel corpora",
"authors": [
{
"first": "D",
"middle": [
"S"
],
"last": "Munteanu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Comput. Linguist",
"volume": "31",
"issue": "",
"pages": "477--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. S. Munteanu and D. Marcu. 2005. Improving machine translation performance by exploiting non-parallel cor- pora. Comput. Linguist., 31:477-504, December.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Platt",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood meth- ods. pages 61-74.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A Scalable Approach for Building a Parallel Corpus from the Web",
"authors": [
{
"first": "V",
"middle": [
"K S"
],
"last": "Rangarajan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Barbosa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 12th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. K. S. Rangarajan, L. Barbosa, and S. Bangalore. 2011. A Scalable Approach for Building a Parallel Corpus from the Web. In Proceedings of 12th Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Using Reinforcement Learning to Spider the Web Efficiently",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rennie",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 1999,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "335--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Rennie and A. McCallum. 1999. Using Reinforcement Learning to Spider the Web Efficiently. In ICML, pages 335-343.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The web as a parallel corpus",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "3",
"pages": "349--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Resnik and N.A. Smith. 2003. The web as a parallel cor- pus. Computational Linguistics, 29(3):349-380.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Parallel strands: A preliminary investigation into mining the web for bilingual text. Machine Translation and the Information Soup",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "72--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Resnik. 1998. Parallel strands: A preliminary investi- gation into mining the web for bilingual text. Machine Translation and the Information Soup, pages 72-82.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of trans- lation edit rate with targeted human annotation. In Pro- ceedings of AMTA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "WebMining: An unsupervised parallel corpora web retrieval system",
"authors": [
{
"first": "J",
"middle": [],
"last": "Tom\u00e1s",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "S\u00e1nchez-Villamil",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings from the Corpus Linguistics Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Tom\u00e1s, E. S\u00e1nchez-Villamil, L. Lloret, and F. Casacuberta. 2005. WebMining: An unsupervised parallel corpora web retrieval system. In Proceedings from the Corpus Linguis- tics Conference.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Large scale parallel document mining for machine translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Ponte",
"suffix": ""
},
{
"first": "A",
"middle": [
"C"
],
"last": "Popat",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dubiner",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "1101--1109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Uszkoreit, J. M. Ponte, A. C. Popat, and M. Dubiner. 2010. Large scale parallel document mining for machine trans- lation. In Proceedings of the 23rd International Confer- ence on Computational Linguistics, COLING '10, pages 1101-1109, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Automatic Acquisition of Chinese-English Parallel Corpus from the Web",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vines",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "420--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Zhang, K. Wu, J. Gao, and P. Vines. 2006. Automatic Acquisition of Chinese-English Parallel Corpus from the Web. Advances in Information Retrieval, pages 420-431.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Bipartite graph representing the graph neighborhood visited by the crawler. Backlink pages point to pages in bilingual sites (BS) and other pages (forward pages).",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Architecture of our crawling strategy to locate bilingual sites.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "1 http://www.dmoz.org/ Total of bilingual sites collected by the crawling strategies in a crawl of 100,000 pages.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Results obtained by the BS Detector (Link Predictor + language identification) and the Link Predictor only. on average 2 to 3 pages per site, since the Link Predictor prunes the search space considerably.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "Algorithm 1 Crawling Policy 1: Input: seeds, BS Detector {seeds : seeds provided by the user, BS Detector: the Bilingual Site Detector.} 2: f rontier = \u2205 {Create the empty frontier.} 3: f rontier.addLinks(seeds) {Add the seeds to the frontier.}",
"content": "<table><tr><td>4: repeat 5: link = f rontier.next() {Retrieve from the frontier the next link to be vis-ited.} 6: page = download(link) {Download the content of the page.} 7: if BS Detector.isRelevant(page) then 8: backlinks = collectBacklins(page) {Collect the backlinks to the given page provided by a search engine API.} 9: f rontier.addLinks(backlinks) {Add the backlinks to the frontier.} 10: end if 11: if link.isBacklink() then 12: outlinks = extractOutlinks(page) {Extract the outlinks of a backlink page.} 13: f rontier.addLinks(outlinks) {Add the outlinks to the frontier.} 14: end if 15: until f rontier.isEmpty()</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "Automatic evaluation metric scores for translation models from out-of-domain data, Web data and combined models. quality of the web crawled data on the test set considered in the experiments is mainly due to the style of the test set. Even though the domain of the crawler is travel and hospitality, the sentences in the test set are more conversational and better matched with Europarl in terms of BLEU metric. On the other hand, the METEOR metric that accounts for the overlapping unigrams is much closer for Europarl and Web data, i.e., the vocabulary coverage is comparable.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
} |