{ "paper_id": "I05-1004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:16.121940Z" }, "title": "Automatic Image Annotation Using Maximum Entropy Model", "authors": [ { "first": "Wei", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "State Key Lab of Intelligent Technology and Systems", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "State Key Lab of Intelligent Technology and Systems", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic image annotation is a newly developed and promising technique to provide semantic image retrieval via text descriptions. It concerns a process of automatically labeling the image contents with a pre-defined set of keywords which are exploited to represent the image semantics. A Maximum Entropy Model-based approach to the task of automatic image annotation is proposed in this paper. In the phase of training, a basic visual vocabulary consisting of blob-tokens to describe the image content is generated at first; then the statistical relationship is modeled between the blob-tokens and keywords by a Maximum Entropy Model constructed from the training set of labeled images. In the phase of annotation, for an unlabeled image, the most likely associated keywords are predicted in terms of the blob-token set extracted from the given image. We carried out experiments on a medium-sized image collection with about 5000 images from Corel Photo CDs. The experimental results demonstrated that the annotation performance of this method outperforms some traditional annotation methods by about 8% in mean precision, showing a potential of the Maximum Entropy Model in the task of automatic image annotation.", "pdf_parse": { "paper_id": "I05-1004", "_pdf_hash": "", "abstract": [ { "text": "Automatic image annotation is a newly developed and promising technique to provide semantic image retrieval via text descriptions. It concerns a process of automatically labeling the image contents with a pre-defined set of keywords which are exploited to represent the image semantics. A Maximum Entropy Model-based approach to the task of automatic image annotation is proposed in this paper. In the phase of training, a basic visual vocabulary consisting of blob-tokens to describe the image content is generated at first; then the statistical relationship is modeled between the blob-tokens and keywords by a Maximum Entropy Model constructed from the training set of labeled images. In the phase of annotation, for an unlabeled image, the most likely associated keywords are predicted in terms of the blob-token set extracted from the given image. We carried out experiments on a medium-sized image collection with about 5000 images from Corel Photo CDs. 
The experimental results demonstrated that the annotation performance of this method outperforms some traditional annotation methods by about 8% in mean precision, showing a potential of the Maximum Entropy Model in the task of automatic image annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Last decade has witnessed an explosive growth of multimedia information such as images and videos. However, we can't access to or make use of the relevant information more leisurely unless it is organized so as to provide efficient browsing and querying. As a result, an important functionality of next generation multimedia information management system will undoubtedly be the search and retrieval of images and videos on the basis of visual content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to fulfill this \"intelligent\" multimedia search engines on the world-wideweb, content-based image retrieval techniques have been studied intensively during the past few years. Through the sustained efforts, a variety of state-of-the-art methods employing the query-by-example (QBE) paradigm have been well established. By this we mean that queries are images and the targets are also images. In this manner, visual similarity is computed between user-provided image and database images based on the low-level visual features such as color, texture, shape and spatial relationships. However, two important problems still remain. First, due to the limitation of object recognition and image understanding, semantics-based image segmentation algorithm is unavailable, so segmented region may not correspond to users' query object. Second, visual similarity is not semantic similarity which means that low-level features are easily extracted and measured, but from the users' point of view, they are nonintuitive. It is not easy to use them to formulate the user's needs. We encounter a socalled semantic gap here. Typically the starting point of the retrieval process is the high-level query from users. So extracting image semantics based on the low-level visual features is an essential step. As we know, semantic information can be represented more accurately by using keywords than by using low-level visual features. Therefore, building relationship between associated text and low-level image features is considered to an effective solution to capture the image semantics. By means of this hidden relationship, images can be retrieved by using textual descriptions, which is also called query-by-keyword (QBK) paradigm. Furthermore, textual queries are a desirable choice for semantic image retrieval which can resort to the powerful textbased retrieval techniques. The key to image retrieval using textual queries is image annotation. But most images are not annotated and manually annotating images is a time-consuming, error-prone and subjective process. So, automatic image annotation is the subject of much ongoing research. Its main goal is to assign descriptive words to whole images based on the low-level perceptual features, which has been recognized as a promising technique for bridging the semantic gap between low-level image features and high-level semantic concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a training set of images labeled with text (e.g. 
keywords, captions) that describe the image content, many statistical models have been proposed to capture the relation between keywords and image features, for example the co-occurrence model, the translation model and the relevance-language model. By exploiting text and image feature co-occurrence statistics, these methods can extract hidden semantics from images, and they have proven successful in providing a sound framework for automatic image annotation and retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a novel approach to the task of automatic image annotation using the Maximum Entropy Model. Though the Maximum Entropy method has been successfully applied to a wide range of applications such as machine translation, it has not been used much in the computer vision domain, especially in automatic image annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows: Section 2 presents related work. Section 3 describes the representation of labeled and unlabeled images, gives a brief introduction to the Maximum Entropy Model, and then details how to use it for automatically annotating unlabeled images. Section 4 demonstrates our experimental results. Section 5 presents conclusions and comments on future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, many statistical models have been proposed for automatic image annotation and retrieval. The work of associating keywords with low-level visual features can be addressed from two different perspectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This kind of approach usually formulates automatic image annotation as a supervised classification problem, and it demands accurate annotation information. That is to say, given a set of training images labeled with semantic keywords, detailed labeling information should be provided: from the training samples, we know which keyword corresponds to which image region, or what kind of concept class describes a whole image. Each annotated keyword (or set of keywords) can then be considered an independent concept class, each class model is trained with manually labeled images, the model is applied to classify each unlabeled image into a relevant concept class, and finally annotations are produced by propagating the corresponding class words to the unlabeled images.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation by Keyword Propagation", "sec_num": "2.1" }, { "text": "Wang and Li [8] introduced a 2-D multi-resolution HMM model to automate linguistic indexing of images. Clusters of fixed-size blocks at multiple resolutions, and the relationships between these clusters, are summarized both across and within the resolutions. To annotate an unlabeled image, the words of highest likelihood are selected based on the comparison between the feature vectors of the new image and the trained concept models. Chang et al. [5] proposed content-based soft annotation (CBSA) for providing images with semantic labels using the Bayes Point Machine (BPM). Starting from a small set of labeled training images, an ensemble of binary classifiers, one per keyword, is trained to predict label membership for images. 
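To make this one-classifier-per-keyword scheme concrete, here is a minimal Python sketch of the general idea (our own schematic illustration, not Chang et al.'s actual Bayes Point Machine implementation; scikit-learn's LogisticRegression is used purely as a stand-in binary classifier, and all identifiers are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_keyword_ensemble(image_features, image_keywords, vocabulary):
    """Train one binary classifier per keyword (one-vs-rest)."""
    ensemble = {}
    for word in vocabulary:
        # 1 if the training image is labeled with this keyword, else 0
        y = np.array([1 if word in kws else 0 for kws in image_keywords])
        if len(set(y)) < 2:          # skip keywords that never (or always) occur
            continue
        clf = LogisticRegression(max_iter=1000)  # stand-in for a Bayes Point Machine
        ensemble[word] = clf.fit(image_features, y)
    return ensemble

def keyword_confidences(ensemble, feature_vector):
    """Return a {keyword: confidence} vector for one unlabeled image."""
    x = np.asarray(feature_vector).reshape(1, -1)
    return {w: float(clf.predict_proba(x)[0, 1]) for w, clf in ensemble.items()}
```
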
Each image is assigned one keyword vector, with each keyword in the vector assigned a confidence factor. In the process of annotation, words with high confidence are considered to be the most likely descriptive words for the new images. The main practical problem with this kind of approach is that a large labeled training corpus is needed. Moreover, during the training and application stages, the training set is fixed and not incremented. Thus, if a new domain is introduced, new labeled examples must be provided to ensure the effectiveness of such classifiers.", "cite_spans": [ { "start": 12, "end": 15, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 437, "end": 440, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation by Keyword Propagation", "sec_num": "2.1" }, { "text": "More recently, there have been some efforts to solve this problem in a more general way. The second approach takes a different strategy, focusing on discovering the statistical links between visual features and words using unsupervised learning methods. During training, a roughly labeled image dataset is provided in which a set of semantic labels is assigned to each whole image, but the word-to-region correspondence is hidden in the space of image features and keywords. So an unsupervised learning algorithm is usually adopted to estimate the joint probability distribution of words and image features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation by Statistical Inference", "sec_num": "2.2" }, { "text": "Mori et al. [4] were the earliest to model these statistics, using a co-occurrence probabilistic model that predicts the probability of a keyword being associated with an image by counting the co-occurrence of words with image regions generated from fixed-size blocks. The blocks are vector quantized to form clusters, which inherit the whole set of keywords assigned to each image; the clusters are in turn used to predict the keywords for unlabeled images. The disadvantage is that the model is rather simple and the rough fixed-size blocks are unable to model objects effectively, leading to poor annotation accuracy. Instead of using fixed-size blocks, Barnard et al. [1] performed Blobworld segmentation and Normalized cuts to produce semantically meaningful regions. They constructed a hierarchical model via the EM algorithm, which combines an asymmetric clustering model that maps words and image regions into clusters with a symmetric clustering model that models the joint distribution of words and regions. Duygulu et al. [2] proposed a translation model to map keywords to individual image regions. First, image regions are created using a segmentation algorithm. For each region, visual features are extracted, and blob-tokens are then generated by clustering the region features across the whole image dataset, so that each image can be represented by a certain number of these blob-tokens. Their translation model uses IBM machine translation Model 1 to annotate a test set of images based on a large number of annotated training images. Another approach, using cross-media relevance models (CMRM), was introduced by Jeon et al. [3] . 
They assumed that this could be viewed as analogous to the cross-lingual retrieval problem, in which a set of keywords $\{w_1, w_2, \ldots, w_n\}$ is related to the whole set of blob-tokens $\{b_1, b_2, \ldots, b_m\}$ of an image, rather than a one-to-one correspondence holding between blob-tokens and keywords.", "cite_spans": [ { "start": 11, "end": 14, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 659, "end": 662, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 1017, "end": 1020, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 1628, "end": 1631, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation by Statistical Inference", "sec_num": "2.2" }, { "text": "Here the joint distribution of blob-tokens and words was learned from a training set of annotated images to perform both automatic image annotation and ranked retrieval. Jeon et al. [9] introduced the use of Maximum Entropy to model fixed-size blocks and keywords, which gave us a good hint for implementing it differently. Lavrenko et al. [11] extended the cross-media relevance model using actual continuous-valued features extracted from image regions; this method avoids the stage of clustering and constructing a discrete visual vocabulary.", "cite_spans": [ { "start": 259, "end": 262, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 411, "end": 415, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation by Statistical Inference", "sec_num": "2.2" }, { "text": "The following Fig. 1 shows the framework for automatic image annotation and keyword-based image retrieval, given a training dataset of images labeled with keywords. First, we segment a whole image into a collection of sub-images and extract a set of low-level visual features to form a feature vector describing the visual content of each region. Second, a visual vocabulary of blob-tokens is generated by clustering all the regions across the whole dataset, so that each image can be represented by a number of blob-tokens drawn from a finite set of visual symbols. Third, both textual and visual information are used to train the Maximum Entropy model, and the learned model is then applied to automatically generate keywords describing the semantic content of an unlabeled image based on its low-level features. Consequently, both the users' information needs and the semantic content of images can be represented by textual information, so that this cross-media retrieval can resort to powerful text IR techniques, which underlines the importance of textual information in semantics-based image retrieval. Two open questions remain. First, which feature sets are the most expressive for an image region? Second, how can blob-tokens be generated, that is, how can one create a visual vocabulary of blob-tokens so that each image in the collection is represented by a number of symbols from this finite set? In our method, we carry out the following two steps: first, segment images into sub-images; second, extract appropriate features for each sub-image, cluster similar regions by k-means, and use the centroid of each cluster as a blob-token. The first step can be performed either by using a segmentation algorithm to produce semantically meaningful units or by partitioning the image into fixed-size rectangular grids. 
Both methods have pros and cons: a general-purpose segmentation algorithm may produce semantically meaningful regions, but due to the limitations of computer vision and image processing, region segmentation can be erroneous and unreliable. The advantage of regular grids is that they do not require complex image segmentation and are easy to compute. However, the rough fixed-size rectangular blocks are unable to model objects effectively, leading to poor annotation accuracy in our experiments.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 20, "text": "Fig. 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Hierarchical Framework of Automatic Annotation and Retrieval", "sec_num": "3.1" }, { "text": "In this paper, we segment images into a number of meaningful regions using Normalized cuts [6] rather than JSEG, because JSEG focuses only on local features and their consistencies, whereas Ncuts aims at extracting the global impression of the image data. So Ncuts may give a better segmentation result than JSEG. Fig. 2 shows segmentation results using Normalized cuts and JSEG: the left image is the original, and the middle and right images are the segmentation results of Ncuts and JSEG respectively. After segmentation, each image region is described by a feature vector formed from HSV histograms and Gabor filters. Similar regions are grouped together by k-means clustering to form the visual vocabulary of blob-tokens. Too many clusters may cause data sparseness, while too few may not converge. Each labeled and unlabeled image can then be described by a number of blob-tokens instead of continuous-valued feature vectors, so we avoid modeling the image data in a high-dimensional and complex feature space.", "cite_spans": [ { "start": 91, "end": 94, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 318, "end": 324, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Fig. 2. Segmentation Results using Normalized cuts and JSEG", "sec_num": null }, { "text": "Maximum Entropy Model is a general-purpose machine learning and classification framework whose main goal is to account for the behavior of a discrete-valued random process. Given a random process whose output value y may be influenced by some specific contextual information x, such a model provides a method of estimating the conditional probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Annotation Strategy Based on Maximum Entropy", "sec_num": "3.3" }, { "text": "$p(y \mid x) = \frac{1}{Z(x)} \prod_{j=1}^{k} \alpha_j^{f_j(x, y)}$ (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Annotation Strategy Based on Maximum Entropy", "sec_num": "3.3" }, { "text": "where $Z(x)$ is a normalization factor and $\alpha_j$ is the weight of the binary feature function $f_j(x, y)$. In the process of annotation, images are segmented using Normalized cuts, every image region is represented by a feature vector consisting of an HSV color histogram and Gabor filter responses, and a basic visual vocabulary containing 500 blob-tokens is generated by k-means clustering. Finally, each segmented region is assigned the label of its closest blob-token. Thus the complex visual contents of images can be represented by a number of blob-tokens. 
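As an illustration of this discretization step, the following is a minimal Python sketch (the feature file name, dimensionality and the use of scikit-learn's KMeans are our own assumptions rather than details of the original system): region feature vectors are clustered into 500 centroids, and each region is then replaced by the index of its nearest centroid, i.e. its blob-token.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: one row per segmented region, e.g. an HSV color histogram
# concatenated with Gabor texture responses for that region.
region_features = np.load("region_features.npy")        # shape: (n_regions, n_dims)

# Build the visual vocabulary: 500 blob-tokens = 500 k-means centroids.
kmeans = KMeans(n_clusters=500, n_init=10, random_state=0).fit(region_features)
blob_token_centroids = kmeans.cluster_centers_           # shape: (500, n_dims)

def blob_tokens_for_image(regions_of_one_image):
    """Map each region of a single image to the index of its closest blob-token."""
    return kmeans.predict(np.asarray(regions_of_one_image)).tolist()
```
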
Due to the imbalanced distribution of keyword frequencies and the data sparseness problem, the size of the pre-defined keyword vocabulary is reduced from 1728 to 121 keywords by keeping only the keywords appearing more than 30 times in the training dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Annotation Strategy Based on Maximum Entropy", "sec_num": "3.3" }, { "text": "We define binary feature functions on the co-occurrence statistics of blob-tokens $b_i$ and keywords $w_j$. For example, if blob-token $b_i$ satisfies the context of feature constraints and the keyword \"water\" also occurs in image I, in other words, if the color and texture feature components are coordinated with the semantic label \"water\", then the value of the feature function is 1, otherwise 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Annotation Strategy Based on Maximum Entropy", "sec_num": "3.3" }, { "text": "The following Fig. 3 shows the annotation procedure, in which MaxEnt captures the hidden relationship between blob-tokens and keywords from a roughly labeled training image set.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 20, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "The Annotation Strategy Based on Maximum Entropy", "sec_num": "3.3" }, { "text": "In the recent past, many models for automatic image annotation have been limited by the scope of their representation. In particular, they fail to exploit the context in the images and words. It is the context in which an image region is placed that gives it a meaningful interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fig. 3. Learning the statistics of blob-tokens and words", "sec_num": null }, { "text": "In our annotation procedure, each annotated word is predicted independently by the Maximum Entropy Model; word correlations are not taken into consideration. However, correlations between annotated words are essentially important in predicting relevant text descriptions. For example, the words \"trees\" and \"grass\" are more likely to co-occur than the words \"trees\" and \"computers\". In order to generate appropriate annotations, a simple language model is developed that takes the word-correlation information into account, so that the textual description is determined not only by the model linking keywords and blob-tokens but also by the word-to-word correlation. We simply count the co-occurrence of words in the pre-defined textual set to produce a simple word correlation model that improves the annotation accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fig. 3. Learning the statistics of blob-tokens and words", "sec_num": null }, { "text": "We carried out experiments using a mid-sized image collection comprising about 5,000 images from Corel Stock Photo CDs, with 4500 images for training and 500 for testing. The following Table 1 shows the results of automatic image annotation using Maximum Entropy. For our training dataset, the visual vocabulary and the pre-defined textual set contain 500 blob-tokens and 121 keywords respectively, so the number of training pairs $(b_i, w_j)$ is 60500. 
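To make equation (1) and these (blob-token, keyword) training pairs concrete, the following toy Python sketch (all weights, indices and identifiers are hypothetical illustrations, not values learned in our experiments) shows a binary feature function $f_j$ and how the weights $\alpha_j$ combine into the conditional probability of a keyword given a blob-token:

```python
import numpy as np

def feature(blob_id, word, pair):
    """Binary feature f_j(x, y): fires iff the (blob-token, keyword) pair matches."""
    return 1.0 if (blob_id, word) == pair else 0.0

def p_word_given_blob(blob_id, word, pairs, alphas, keywords):
    """Equation (1): p(y|x) = (1/Z(x)) * prod_j alpha_j ** f_j(x, y)."""
    def score(w):
        return np.prod([a ** feature(blob_id, w, pair) for pair, a in zip(pairs, alphas)])
    z = sum(score(w) for w in keywords)      # normalization factor Z(x)
    return score(word) / z

# Toy example with two surviving features and hypothetical weights:
keywords = ["water", "sky"]
pairs = [(17, "water"), (17, "sky")]
alphas = [3.0, 1.5]
print(p_word_given_blob(17, "water", pairs, alphas, keywords))   # 3.0 / 4.5 = 0.667
```

At annotation time, the keywords with the highest such conditional probabilities, optionally re-ranked by the word co-occurrence model described above, are assigned to the unlabeled image.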
After the procedure of feature selection, only 9550 pairs are left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Analysis", "sec_num": "4" }, { "text": "For model parameter estimation, a few algorithms are widely used, including Generalized Iterative Scaling and Improved Iterative Scaling. Here we use the Limited Memory Variable Metric method, which has been shown to be effective for Maximum Entropy Models [10] . Finally, we obtain the model linking blob-tokens and keywords, and the trained model is then used to annotate the test images. The following Fig. 4 shows some of the retrieval results using the keyword 'water' as a textual query.", "cite_spans": [ { "start": 262, "end": 266, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 378, "end": 384, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Analysis", "sec_num": "4" }, { "text": "The following Fig. 5 and Fig. 6 show the precision and recall obtained when a set of high-frequency keywords is used as user queries. We implemented two statistical models to link blob-tokens and keywords. The annotation accuracy is evaluated indirectly, using precision and recall. After posing a keyword query for images, precision and recall are defined as follows:", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 20, "text": "Fig. 5", "ref_id": null }, { "start": 25, "end": 31, "text": "Fig. 6", "ref_id": null } ], "eq_spans": [], "section": "Fig. 4. Some of retrieved images using 'water' as a query", "sec_num": null }, { "text": "$\mathrm{precision} = \frac{A}{A+B}, \qquad \mathrm{recall} = \frac{A}{A+C}$ (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fig. 4. Some of retrieved images using 'water' as a query", "sec_num": null }, { "text": "where A denotes the number of relevant images retrieved, B denotes the number of irrelevant images retrieved, and C denotes the number of relevant images not retrieved from the image dataset; images whose labels contain the query keyword are considered relevant, otherwise irrelevant. Table 2 shows that our method outperforms the Co-occurrence model [4] in the average precision and recall. Our model uses blob-tokens to represent the contents of the image regions and converts the task of automatic image annotation into a process of translating information from a visual language (blob-tokens) to a textual language (keywords), so the Maximum Entropy Model is a natural and effective choice for our task; it has been successfully applied to dyadic data in which observations are made from two finite sets of objects. But disadvantages also exist, and two problems have to be considered. First, Maximum Entropy is constrained by the equation", "cite_spans": [ { "start": 283, "end": 286, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Fig. 4. Some of retrieved images using 'water' as a query", "sec_num": null }, { "text": "$p(f) = \tilde{p}(f)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fig. 4. Some of retrieved images using 'water' as a query", "sec_num": null }, { "text": ", which assumes that the expected value of each feature under the stochastic model should be the same as its expected value on the training sample. 
However, due to the unbalanced distribution of keyword frequencies in the training subset of the Corel data, this assumption leads to an undesirable problem: common high-frequency words are usually associated with too many irrelevant blob-tokens, whereas uncommon low-frequency words have little chance of being selected as annotations for any image region. Consider the words \"sun\" and \"apple\": both may be related to regions with \"red\" color and \"round\" shape, so it is difficult to choose between them; however, since \"sun\" is a common word compared to \"apple\" in the lexical set, \"sun\" will almost always be used as the annotation for such regions. To address this kind of problem, our future work will mainly focus on a more sophisticated language model to improve the statistics between image features and keywords. Second, the quality of segmentation also affects the annotation performance. Semantic image segmentation is a challenging and complex problem, and current segmentation algorithms based on low-level visual features may break up the objects in the images; that is, segmented regions do not necessarily correspond to semantic objects or concepts, which may cause the Maximum Entropy Model to make wrong decisions on an unseen image.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fig. 4. Some of retrieved images using 'water' as a query", "sec_num": null }, { "text": "In this paper, we propose a novel approach for automatic image annotation and retrieval using the Maximum Entropy Model. Compared to other traditional methods, the proposed model achieves better annotation and retrieval results. But three main challenges still remain: 1) A semantically meaningful segmentation algorithm is still not available, so a segmented region may not correspond to a semantic object, and region features are insufficient to describe the image semantics. 2) The basic visual vocabulary construction using k-means is based only on the visual features, which may cause two different semantic objects with similar visual features to fall into the same blob-token; this may degrade the annotation quality. 3) Our annotation task mainly depends on the trained model linking image features and keywords; the spatial context of image regions and the word correlations are not fully taken into consideration. 
In the future, more work should be done on image segmentation techniques, clustering algorithms, appropriate feature extraction and contextual information between regions and words to improve the annotation accuracy and retrieval performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" } ], "back_matter": [ { "text": "We would like to express our deepest gratitude to Kobus Barnard ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Matching words and pictures", "authors": [ { "first": "K", "middle": [], "last": "Barnard", "suffix": "" }, { "first": "P", "middle": [], "last": "Dyugulu", "suffix": "" }, { "first": "N", "middle": [], "last": "Freitas", "suffix": "" }, { "first": "D", "middle": [], "last": "Forsyth", "suffix": "" }, { "first": "D", "middle": [], "last": "Blei", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1107--1135", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Barnard, P. Dyugulu, N. de Freitas, D. Forsyth, D. Blei, and M. I. Jordan. Matching words and pictures. Journal of Machine Learning Research, 3: 1107-1135, 2003.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Ojbect recognition as machine translation: Learning a lexicon fro a fixed image vocabulary", "authors": [ { "first": "P", "middle": [], "last": "Duygulu", "suffix": "" }, { "first": "K", "middle": [], "last": "Barnard", "suffix": "" }, { "first": "N", "middle": [], "last": "Freitas", "suffix": "" }, { "first": "D", "middle": [], "last": "Forsyth", "suffix": "" } ], "year": 2002, "venue": "Seventh European Conf. on Computer Vision", "volume": "", "issue": "", "pages": "97--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Duygulu, K. Barnard, N. de Freitas, and D. Forsyth. Ojbect recognition as machine translation: Learning a lexicon fro a fixed image vocabulary. In Seventh European Conf. on Computer Vision, 97-112, 2002.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic image annotation and retrieval using cross-media relevance models", "authors": [ { "first": "J", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "V", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "R", "middle": [], "last": "Manmatha", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 26 th intl. SIGIR Conf", "volume": "", "issue": "", "pages": "119--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Jeon, V. Lavrenko and R. Manmatha. Automatic image annotation and retrieval using cross-media relevance models. In Proceedings of the 26 th intl. SIGIR Conf, 119-126, 2003.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Image-to-word transformation based on dividing and vector quantizing images with words. First International Workshop on Multimedia Intelligent Storage and Retrieval Management", "authors": [ { "first": "Y", "middle": [], "last": "Mori", "suffix": "" }, { "first": "H", "middle": [], "last": "Takahashi", "suffix": "" }, { "first": "R", "middle": [], "last": "Oka", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Mori, H. Takahashi, and R. 
Oka, Image-to-word transformation based on dividing and vector quantizing images with words. First International Workshop on Multimedia Intelli- gent Storage and Retrieval Management, 1999.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "CBSA: Content-based soft annotation for multimodal image retrieval using bayes point machines", "authors": [ { "first": "Edward", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kingshy", "middle": [], "last": "Goh", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "Sychay", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2003, "venue": "IEEE Transactions on Circuts and Systems for Video Technology Special Issue on Conceptual and Dynamical Aspects of Multimedia Content Descriptions", "volume": "13", "issue": "", "pages": "26--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Chang, Kingshy Goh, Gerard Sychay and Gang Wu. CBSA: Content-based soft annotation for multimodal image retrieval using bayes point machines. IEEE Transactions on Circuts and Systems for Video Technology Special Issue on Conceptual and Dynamical Aspects of Multimedia Content Descriptions, 13(1): 26-38, 2003.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Normalized cuts and image segmentation", "authors": [ { "first": "J", "middle": [], "last": "Shi", "suffix": "" }, { "first": "J", "middle": [], "last": "Malik", "suffix": "" } ], "year": 2000, "venue": "IEEE Transactions On Pattern Analysis and Machine Intelligence", "volume": "22", "issue": "8", "pages": "888--905", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions On Pat- tern Analysis and Machine Intelligence, 22(8): 888-905, 2000.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "S", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [], "last": "Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Berger, S. Pietra and V. Pietra. A maximum entropy approach to natural language proc- essing. In Computational Linguistics, 39-71, 1996.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic linguistic indexing of pictures by a statistical modeling approach", "authors": [ { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "J", "middle": [ "A" ], "last": "Wang", "suffix": "" } ], "year": 2003, "venue": "IEEE Transactions on PAMI", "volume": "25", "issue": "10", "pages": "175--1088", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Li and J. A. Wang. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Transactions on PAMI, 25(10): 175-1088, 2003.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Using maximum entropy for automatic image annotation", "authors": [ { "first": "R", "middle": [], "last": "Jiwoon Jeon", "suffix": "" }, { "first": "", "middle": [], "last": "Manmatha", "suffix": "" } ], "year": 2004, "venue": "proceedings of third international conference on image and video retrieval", "volume": "", "issue": "", "pages": "24--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwoon Jeon, R. Manmatha. Using maximum entropy for automatic image annotation. 
In proceedings of third international conference on image and video retrieval, 24-31, 2004.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A comparison of algorithms for maximum entropy parameter estimation", "authors": [ { "first": "Robert", "middle": [], "last": "Malouf", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 6 th Workshop on Computational Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Malouf. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the 6 th Workshop on Computational Language Learning, 2003.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A model for learning the semantics of pictures", "authors": [ { "first": "V", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "R", "middle": [], "last": "Manmatha", "suffix": "" }, { "first": "J", "middle": [], "last": "Jeon", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 16 th Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Lavrenko, R. Manmatha and J. Jeon. A model for learning the semantics of pictures. In Proceedings of the 16 th Annual Conference on Neural Information Processing Systems, 2004.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "is related to the set of blob-tokens { } n b b b" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Hierarchical Framework of Automatic Annotation and Retrieval learning correlations between blob-tokens and textual annotations applying correlations to generate annotations for unlabeled images 3.2 Image Representation and Pre-processing A central issue in content-based image annotation and retrieval is how to describe the visual information in a way compatible with human visual perception. But until now, no general framework is proposed. For different tasks and goals, different low-level features are used to describe and analyze the visual content of images. On the whole, there are two kinds of interesting open questions remain unresolved." }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "co-occurrence statistics of blob-tokens i b and keywords j w , where FC denote the context of feature constraints for each blob-token. The following example represents the co-occurrence of the blob-token * b and the keyword \"water\" in an image I." }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "the feasibility and effectiveness of Maximum Entropy model, we have implemented the co-occurrence model as one of the baselines whose conditional probability ( ) co-occurrence of i b and j w , j n denote the occurring number of j w in the total N words." }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": "Precision of retrieval using some high-frequency keywords Recall of retrieval using some high-frequency keywords" }, "TABREF0": { "text": "Automatic image annotation results", "html": null, "type_str": "table", "content": "
Images | Original Annotation | Automatic Annotation
(image) | sun city sky mountain | Sun sky mountain clouds
(image) | flowers tulips mountain sky | Flowers sky trees grass
(image) | tufa snow sky grass | snow sky grass stone
(image) | polar bear snow post | bear snow sky rocks
", "num": null }, "TABREF1": { "text": "Experimental results with average precision and meanThe above experimental results intable 2 show that our method outperforms the Co-occurrence model", "html": null, "type_str": "table", "content": "
MethodMean precisionMean recall
Co-occurrence0.110.18
Maximum Entropy0.170.25
", "num": null } } } }