One Click Intent Image Search
Pandav Anand1, Muke Sonal2, Kore Sudarshan3, Sangade Komal4

1Government College of Engineering and Research, Awsari-Pune (Maharashtra), India.
2Government College of Engineering and Research, Awsari-Pune (Maharashtra), India.
3Government College of Engineering and Research, Awsari-Pune (Maharashtra), India.
4Government College of Engineering and Research, Awsari-Pune (Maharashtra), India.

Manuscript received on 20 January 2015 | Revised Manuscript received on 28 January 2015 | Manuscript published on 30 January 2015 | PP: 52-55 | Volume-3 Issue-6, January 2015 | Retrieval Number: F1307013615/2015©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Web-scale image search engines (e.g., Google Image Search, Bing Image Search) mostly rely on surrounding text features. It is difficult for them to interpret users’ search intention from query keywords alone, which leads to ambiguous and noisy search results that are far from satisfactory. It is important to use visual information in order to resolve the ambiguity in text-based image retrieval. In this paper, we propose a novel Internet image search approach. It only requires the user to click on one query image with minimum effort, and images from a pool retrieved by text-based search are re-ranked based on both visual and textual content. Our key contribution is to capture the user’s search intention from this one-click query image. Many commercial Internet-scale image search engines use only keywords as queries. Users type query keywords in the hope of finding a certain type of images. The search engine returns thousands of images ranked by the keywords extracted from the surrounding text. It is well known that text-based image search suffers from the ambiguity of query keywords. The keywords provided by users tend to be short. For example, the average query length of the top 1,000 queries of Picsearch is 1.368 words, and 97% of them contain only one or two words. They cannot describe the content of images accurately. The search results are noisy and consist of images with quite different semantic meanings. In order to resolve the ambiguity, additional information has to be used to capture users’ search intention. One way is text-based keyword expansion, making the textual description of the query more detailed. Existing linguistically related methods find either synonyms or other linguistically related words from a thesaurus, or find words frequently co-occurring with the query keywords.
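The core re-ranking idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the feature vectors, the `rerank_by_click` function, and the use of cosine similarity are all assumptions made for the sake of the example; the paper's visual features and similarity measures may differ.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rerank_by_click(query_feature, pool):
    """Re-rank a text-retrieved image pool by visual similarity
    to the single query image the user clicked.

    pool: list of (image_id, feature_vector) pairs from text-based search.
    Returns image ids ordered from most to least visually similar.
    """
    scored = [(cosine_similarity(query_feature, feat), img) for img, feat in pool]
    scored.sort(key=lambda s: -s[0])
    return [img for _, img in scored]

# Toy example: the clicked image's feature is closest to image "b".
query = [1.0, 0.0, 0.5]
pool = [("a", [0.0, 1.0, 0.0]),
        ("b", [0.9, 0.1, 0.4]),
        ("c", [0.2, 0.8, 0.1])]
print(rerank_by_click(query, pool))  # → ['b', 'c', 'a']
```

In a full system, textual relevance scores from the keyword search would typically be combined with these visual scores rather than discarded, so that re-ranking uses both modalities.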
Keywords: We believe that adding visual information to image search is important. However, the interaction has to be as simple as possible. The absolute minimum is one click.

Scope of the Article: Signal and Image Processing