{
"paper_id": "I11-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:31:00.902791Z"
},
"title": "Quality-biased Ranking of Short Texts in Microblogging Services",
"authors": [
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": "aihuang@tsinghua.edu.cn"
},
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beihang University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The abundance of user-generated content comes at a price: the quality of content may range from very high to very low. We propose a regression approach that incorporates various features to recommend short-text documents from Twitter, with a bias toward quality perspective. The approach is built on top of a linear regression model which includes a regularization factor inspired from the content conformity hypothesis-documents similar in content may have similar quality. We test the system on the Edinburgh Twitter corpus. Experimental results show that the regularization factor inspired from the hypothesis can improve the ranking performance and that using unlabeled data can make ranking performance better. Comparative results show that our method outperforms several baseline systems. We also make systematic feature analysis and find that content quality features are dominant in short-text ranking.",
"pdf_parse": {
"paper_id": "I11-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "The abundance of user-generated content comes at a price: the quality of content may range from very high to very low. We propose a regression approach that incorporates various features to recommend short-text documents from Twitter, with a bias toward quality perspective. The approach is built on top of a linear regression model which includes a regularization factor inspired from the content conformity hypothesis-documents similar in content may have similar quality. We test the system on the Edinburgh Twitter corpus. Experimental results show that the regularization factor inspired from the hypothesis can improve the ranking performance and that using unlabeled data can make ranking performance better. Comparative results show that our method outperforms several baseline systems. We also make systematic feature analysis and find that content quality features are dominant in short-text ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "More and more user-generated data are emerging on personal blogs, microblogging services (e.g. Twitter), social and e-commerce websites. However, the abundance of user-generated content comes at a price: there may be high-quality content, but also much spam content such as advertisements, selfpromotion, pointless babbles, or misleading information. Therefore, assessing the quality of information has become a challenging problem for many tasks such as information retrieval, review mining (Lu et al., 2010) , and question answering (Agichtein et al., 2008) .",
"cite_spans": [
{
"start": 492,
"end": 509,
"text": "(Lu et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 535,
"end": 559,
"text": "(Agichtein et al., 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on predicting the quality of very short texts which are obtained from Twitter. Twitter is a free social networking and microblogging service that enables its users to send and read other users' updates, known as \"Tweets\". Each tweet has up to 140 characters in length. With more than 200 million users (March 2011), Twitter has become one of the biggest mass media to broadcast and digest information for users. It has exhibited advantages over traditional news agencies in the success of reporting news more timely, for instance, in reporting the Chilean earthquake of 2010 (Mendoza et al., 2010) . A comparative study (Teevan et al., 2011) shows that queries issued to Twitter tend to seek more temporally relevant information than those to general web search engines.",
"cite_spans": [
{
"start": 599,
"end": 621,
"text": "(Mendoza et al., 2010)",
"ref_id": "BIBREF0"
},
{
"start": 644,
"end": 665,
"text": "(Teevan et al., 2011)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the massive information broadcasted on Twitter, there are a huge amount of searches every day and Twitter has become an important source for seeking information. However, according to the Pear Analytics (2009) report on 2000 sample tweets, 40.5% of the tweets are pointless babbles, 37.5% are conversational tweets, and only 3.6% are news (which are most valuable for users who seek news information). Therefore, when a user issues a query, recommending tweets of good quality has become extremely important to satisfy the user's information need: how can we retrieve trustworthy and informative posts to users?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, we must note that Twitter is a social networking service that encourages various content such as news reports, personal updates, babbles, conversations, etc. In this sense, we can not say which content has better quality without considering the value to the writer or reader. For instance, for a reader, the tweets from his friends or who he follows may be more desirable than those from others, whatever the quality is. In this paper, we have a special focus on finding tweets on news topics when we construct the evaluation datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a method of incorporating various features for quality-biased tweet recommendation in response to a query. The major contributions of this paper are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose an approach for quality-biased ranking of short documents. Quality-biased is referred to the fact that we explore various features that may indicate quality. We also present a complete feature analysis to show which features are most important for this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a content conformity hypothesis, and then formulate it into a regularization factor on top of a regression model. The performance of the system with such a factor is boosted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 It is feasible to plug unlabeled data into our approach and leveraging unlabeled data can enhance the performance. This characteristics is appealing for information retrieval tasks since only a few labeled data are available in such tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: in Section 2 we survey related work. We then formulate our problem in Section 3 and present the hypothesis in Section 4. Various features are presented in Section 5. The dataset and experiment results are presented in Section 6 and Section 7, respectively. We summarize this work in Section 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Quality prediction has been a very important problem in many tasks. In review mining, quality prediction has two lines of research: one line is to detect spam reviews (Jindal and Liu, 2008) or spam reviewers (Lim et al., 2010) , which is helpful to exclude misleading information; the other is to identify high-quality reviews, on which we will focus in this survey. Various factors and contexts have been studied to produce reliable and consistent quality prediction. Danescu-Niculescu-Mizil et al. (2009) stud-ied several factors on helpfulness voting of Amazon product reviews. Ghose and Ipeirotis (2010) studied several factors on assessing review helpfulness including reviewer characteristics, reviewer history, and review readability and subjectivity. Lu et al. (2010) proposed a linear regression model with various social contexts for review quality prediction. The authors employed author consistency, trust consistency and co-citation consistency hypothesis to predict more consistently. studied three factors, i.e., reviewer expertise, writing style, and timeliness, and proposed a non-linear regression model with radial basis functions to predict the helpfulness of movie reviews. Kim et al. (2006) used SVM regression with various features to predict review helpfulness.",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "(Jindal and Liu, 2008)",
"ref_id": "BIBREF19"
},
{
"start": 208,
"end": 226,
"text": "(Lim et al., 2010)",
"ref_id": "BIBREF20"
},
{
"start": 469,
"end": 506,
"text": "Danescu-Niculescu-Mizil et al. (2009)",
"ref_id": null
},
{
"start": 759,
"end": 775,
"text": "Lu et al. (2010)",
"ref_id": "BIBREF16"
},
{
"start": 1195,
"end": 1212,
"text": "Kim et al. (2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Prediction",
"sec_num": "2.1"
},
{
"text": "Finding high-quality content and reliable users is also very important for question answering. Agichtein et al. (2008) proposed a classification framework of estimating answer quality. They studied content-based features (e.g. the answer length) and usage-based features derived from question answering communities. Jeon et al. 2006used nontextual features extracted from the Naver Q&A service to predict the quality of answers. Bian et al. (2009) proposed a mutual reinforcement learning framework to simultaneously predict content quality and user reputation. Shah and Pomerantz (2010) proposed 13 quality criteria for answer quality annotation and then found that contextual information such as a user's profile, can be critical in predicting the quality of answers.",
"cite_spans": [
{
"start": 95,
"end": 118,
"text": "Agichtein et al. (2008)",
"ref_id": "BIBREF1"
},
{
"start": 429,
"end": 447,
"text": "Bian et al. (2009)",
"ref_id": "BIBREF2"
},
{
"start": 562,
"end": 587,
"text": "Shah and Pomerantz (2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Prediction",
"sec_num": "2.1"
},
{
"text": "However, the task we address in this paper is quite different from previous problems. First, the document to deal with is very short. Each tweet has up to 140 characters. Thus, we are going to investigate those factors that influence the quality of such short texts. Second, as mentioned, high-quality information on Twitter (e.g., news) is only a very small proportion. Thus, how to distill high quality content from majority proportions of low-quality content may be more challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Prediction",
"sec_num": "2.1"
},
{
"text": "Twitter is of high value for both personal and commercial use. Users can post personal updates, keep tight contact with friends, and obtain timely information. Companies can broadcast latest news to and interact with customers, and collect business intelligence via opinion mining. Under this background, there has been a large body of novel applications on Twitter, including social networking mining (Kwark et al., 2010) , real time search 1 , sentiment analysis 2 , detecting influenza epidemics (Culotta, 2010) , and even predicting politics elections (Tumasjan et al., 2010) .",
"cite_spans": [
{
"start": 402,
"end": 422,
"text": "(Kwark et al., 2010)",
"ref_id": null
},
{
"start": 499,
"end": 514,
"text": "(Culotta, 2010)",
"ref_id": "BIBREF10"
},
{
"start": 556,
"end": 579,
"text": "(Tumasjan et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Novel Applications on Twitter",
"sec_num": "2.2"
},
{
"text": "As Twitter has shown to report news more timely than traditional news agencies, detecting tweets of news topic has received much attention. Sakaki et al. (2010) proposed a real-time earthquake detection framework by treating each Twitter user as a sensor. addressed the problem of detecting new events from a stream of Twitter posts and adopted a method based on localitysensitive hashing to make event detection feasible on web-scale corpora. To facilitate fine-grained information extraction on news tweets, presented a work on semantic role labeling for such texts. Corvey et al. (2010) proposed a work for entity detection and entity class annotation on tweets that were posted during times of mass emergency. Ritter et al. (2010) proposed a topic model to detect conversational threads among tweets.",
"cite_spans": [
{
"start": 140,
"end": 160,
"text": "Sakaki et al. (2010)",
"ref_id": "BIBREF5"
},
{
"start": 569,
"end": 589,
"text": "Corvey et al. (2010)",
"ref_id": "BIBREF21"
},
{
"start": 714,
"end": 734,
"text": "Ritter et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Novel Applications on Twitter",
"sec_num": "2.2"
},
{
"text": "Since a large amount of tweets are posted every day, ranking strategies is extremely important for users to find information quickly. Current ranking strategy on Twitter considers relevance to an input query, information recency (the latest tweets are preferred), and popularity (the retweet times by other users). The recency information, which is useful for real-time web search, has also been explored by Dong et al. (2010) who used fresh URLs present in tweets to rank documents in response to recency sensitive queries. Duan et al. (2010) proposed a ranking SVM approach to rank tweets with various features.",
"cite_spans": [
{
"start": 408,
"end": 426,
"text": "Dong et al. (2010)",
"ref_id": "BIBREF9"
},
{
"start": 525,
"end": 543,
"text": "Duan et al. (2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Novel Applications on Twitter",
"sec_num": "2.2"
},
{
"text": "Given a set of queries $Q = \\{q_1, q_2, \\cdots, q_n\\}$, for each query $q_k$ we have a set of short documents $D_k = \\{d_k^1, d_k^2, \\cdots\\}$ retrieved by our built-in search engine. The document set $D_k$ is partially labeled, i.e., a small portion of the documents in $D_k$ were annotated with a category set $C = \\{1, 2, 3, 4, 5\\}$, where 5 denotes the highest quality and 1 the lowest. Therefore, we write $D_k = D_k^U \\cup D_k^L$, where $D_k^U$ denotes the unlabeled documents and $D_k^L$ the labeled documents. Each document in $D_k$ is represented as a feature vector $d_i = (x_1, x_2, \\cdots, x_m)$, where $m$ is the total number of features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},
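{
"text": "To make this setup concrete, here is a minimal Python sketch (our own illustration, not the authors' code; the names Doc, features, and label are hypothetical):\n\nimport numpy as np\nfrom dataclasses import dataclass\nfrom typing import Optional\n\n@dataclass\nclass Doc:\n    features: np.ndarray          # m-dimensional feature vector d_i\n    label: Optional[int] = None   # quality label in C = {1,...,5}, None if unlabeled\n\n# A partially labeled document set D_k for one query\nD_k = [Doc(np.ones(4), label=5), Doc(np.zeros(4))]\nD_L = [d for d in D_k if d.label is not None]  # labeled subset D_k^L\nD_U = [d for d in D_k if d.label is None]      # unlabeled subset D_k^U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},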
{
"text": "The learning task is to train a mapping function f (D) : D \u2192 C, to predict the quality label of a document given a query q. We use a linear function f (d) = w T d for learning and where w is the weight vector. Formally, we define an objective function as follows to guide the learning process:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},
{
"text": "\u0398(w) = 1 n n \u2211 k=1 1 | D L k | \u2211 d i \u2208D L k \u2113(w T d i ,\u0177 i ) + \u03b1w T w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},
{
"text": "where $\\ell(\\cdot, \\cdot)$ is the loss function that measures the difference between a predicted quality $f(d_i) = w^T d_i$ and the labeled quality $\\hat{y}_i$, $D_k^L$ is the set of labeled documents for query $q_k$, $\\hat{y}_i$ is the quality label for document $d_i$, $n$ is the total number of queries, and $\\alpha$ is a regularization parameter for $w$. The loss function used in this work is the squared error loss, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},
{
"text": "$$\\ell(w^T d_i, \\hat{y}_i) = (w^T d_i - \\hat{y}_i)^2 \\quad (2)$$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},
{
"text": "It's easy to see that this problem has a closed-form solution, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},
{
"text": "$$w = \\arg\\min_w \\Theta(w) = \\Big( \\sum_{i=1}^{N_l} d_i d_i^T + \\alpha N_l I \\Big)^{-1} \\sum_{i=1}^{N_l} \\hat{y}_i d_i \\quad (3)$$ where $I$ is an identity matrix of size $m$ (the dimension of the feature vector) and $N_l$ is the total number of labeled documents across all queries. As mentioned, a large number of documents are retrieved for each query, while we only sample a small number of them for manual annotation. Thus, many more unlabeled documents remain to be utilized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},
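{
"text": "As a concrete illustration, Eq. 3 can be computed directly with NumPy. The following is a minimal sketch under our own naming assumptions (X stacks the labeled feature vectors row-wise, y holds their quality labels, and alpha is the regularization parameter); it is not the authors' implementation:\n\nimport numpy as np\n\ndef fit_closed_form(X, y, alpha):\n    # w = (sum_i d_i d_i^T + alpha * N_l * I)^{-1} * sum_i y_i d_i   (Eq. 3)\n    N_l, m = X.shape\n    gram = X.T @ X + alpha * N_l * np.eye(m)\n    return np.linalg.solve(gram, X.T @ y)\n\n# Usage: w = fit_closed_form(X_labeled, y_labeled, alpha=0.1)\n# The predicted quality of a new document d is then w @ d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation and Methodology",
"sec_num": "3"
},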
{
"text": "To make quality prediction more consistent and to utilize the unlabeled data, we propose the content conformity hypothesis which assumes that the quality of documents similar in content should be close to each other. This hypothesis can be formulated as a regularization factor in the objective, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Conformity Hypothesis",
"sec_num": "4"
},
{
"text": "$$\\Theta_1(w) = \\Theta(w) + \\beta \\sum_{k=1}^{n} \\sum_{d_i, d_j \\in D_k \\wedge IsSim(d_i, d_j)} (w^T d_i - w^T d_j)^2 \\quad (4)$$ where $IsSim(d_i, d_j)$ is a predicate asserting that two documents are similar, and $\\beta$ is an empirical parameter. Note that $D_k$ usually consists of labeled data only, but it may also include unlabeled documents for query $q_k$; in this way, we can utilize the unlabeled documents as well as the labeled ones. There are various ways to determine whether two documents retrieved for the same query are similar: one is to use TF*IDF cosine similarity with a threshold, and another is to use clustering, where two documents in the same cluster are deemed similar. We use the first approach in this paper and leave the second for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Conformity Hypothesis",
"sec_num": "4"
},
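{
"text": "The IsSim predicate can be sketched as follows using scikit-learn (a minimal illustration under our own assumptions; the paper does not specify the similarity threshold, so the value 0.5 below is hypothetical):\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\n\ndef similar_pairs(texts, threshold=0.5):\n    # Return index pairs (i, j), i < j, whose TF*IDF cosine similarity exceeds the threshold\n    tfidf = TfidfVectorizer().fit_transform(texts)\n    sim = cosine_similarity(tfidf)\n    n = sim.shape[0]\n    return [(i, j) for i in range(n) for j in range(i + 1, n) if sim[i, j] >= threshold]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Conformity Hypothesis",
"sec_num": "4"
},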
{
"text": "To obtain the closed-form solution of Eq. 4, we define an auxiliary matrix A = (a ij ) where each a ij is 1 if document d i is similar to document d j for some query. Then, Eq. 4 can be re-written as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Conformity Hypothesis",
"sec_num": "4"
},
{
"text": "\u0398 1 (w) = \u0398(w) + \u03b2 \u2211 i