Monday, December 13, 2010

Reading #25: A descriptor for large scale image retrieval based on sketched feature lines

Comments:
Paco

Summary:
So you need to find an image online... do you describe it in words? What if you describe the wrong parts of the image because those are what you deem important? How do you quantify an image's importance? You could draw the image, but then why do you NEED it if you can draw?!

Ok, so obviously sketching to search for images would be cool. So cool, in fact, that the authors of this paper built a system that does exactly that. Their system is designed to query beastly databases containing millions of images. Database images and the user's input sketch are preprocessed the same way, which allows matching based on similar descriptors.
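To make that "same preprocessing on both sides" idea concrete, here's a minimal, hypothetical sketch in Python. This is NOT the paper's actual descriptor; it's a toy edge-orientation histogram, but it shows the core trick: image and sketch get reduced to the same representation, and the query just finds the nearest descriptor.

```python
import numpy as np

def edge_orientation_descriptor(img, bins=8):
    """Toy descriptor: histogram of gradient orientations over edge pixels.

    Both a database image and a user sketch pass through this SAME
    function, so the two become directly comparable vectors.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # orientation only, ignore direction
    edges = mag > 0.1                         # crude edge threshold
    if not edges.any():
        return np.zeros(bins)
    hist, _ = np.histogram(ang[edges], bins=bins, range=(0.0, np.pi),
                           weights=mag[edges])
    return hist / hist.sum()                  # normalize away stroke strength

def match(sketch, database):
    """Index of the database image whose descriptor is closest to the sketch's."""
    q = edge_orientation_descriptor(sketch)
    dists = [np.linalg.norm(q - edge_orientation_descriptor(img))
             for img in database]
    return int(np.argmin(dists))

# Tiny demo: a roughly-vertical stroke matches the vertical-line image.
vertical = np.zeros((16, 16)); vertical[:, 8] = 1.0
horizontal = np.zeros((16, 16)); horizontal[8, :] = 1.0
sketch = np.zeros((16, 16)); sketch[:, 7] = 1.0
assert match(sketch, [horizontal, vertical]) == 1
```

Note the sketch's line is in a different position than the database image's, but the orientation histogram still matches; that position-tolerance is one reason descriptor matching works better than pixel-by-pixel comparison.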


Database image descriptors are cached in memory and clustered by color similarity. A search takes up to 3.5 seconds.
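Here's a toy illustration of why caching plus clustering speeds things up, under my own simplifying assumptions (generic descriptor vectors and crude nearest-center clusters, not the system's actual scheme): a query only scans the one cluster whose center it falls nearest to, instead of all million descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy in-memory "cache": 1000 random 8-D descriptors.
descriptors = rng.random((1000, 8))

# Crude clustering: pick 10 descriptors as centers, assign each
# descriptor to its nearest center (a stand-in for real clustering).
centers = descriptors[rng.choice(1000, size=10, replace=False)]
labels = np.argmin(
    np.linalg.norm(descriptors[:, None] - centers[None], axis=2), axis=1)

def clustered_search(query):
    """Scan only the cluster whose center is nearest the query."""
    c = np.argmin(np.linalg.norm(centers - query, axis=1))
    members = np.flatnonzero(labels == c)     # small subset of the database
    best = members[np.argmin(
        np.linalg.norm(descriptors[members] - query, axis=1))]
    return int(best)

# Querying with a descriptor already in the database finds itself.
assert clustered_search(descriptors[42]) == 42
```

The trade-off is the usual one for approximate search: if the true best match sits in a different cluster than the query's nearest center, it gets missed, which is the price paid for sub-second-scale queries over millions of images.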

Discussion:
Awesome! As Paco and I were discussing, this idea could be used to teach both users and the system words in different languages. If you draw a simple item such as a tree or a cat, you could also provide the written word in your native language. Once you select a result, the system could then "learn" that your word describes that image and employ cross-language searches in the future.

2 comments:

  1. Good idea! The system could learn how to describe an image from input sketches. The current standard is keyword-based image search, and I think sketch input is a big step forward in this direction. Combining sketches with descriptions could make image retrieval much more efficient.

  2. Awesome! Let's build such a system, Chris! haha
