Duration: 1 January 2009 to 31 December 2011
Funded by: reference TIN2008-04571

Image and video retrieval involves searching for and retrieving images and videos from a large database of digital media. Historically, image and video retrieval has been Content-Based: the user has to provide an image or video as an example in order to obtain similar results. However, such examples are sometimes difficult to provide, and the process of providing them is not always comfortable from the user's point of view. The main goal of present image and video retrieval systems is to find a suitable link between the textual description of what the user wants and the semantic content of the multimedia (image and video) information. This challenging goal is the main focus of our research work in this area.

The precision of Content-Based Image Retrieval (CBIR) systems has improved in recent years, but it is still not sufficient from a realistic, practical point of view. Moreover, users still prefer to issue text queries rather than submit an image as an example of the desired results. This preference adds a further layer of complexity, since the system must bridge the gap between the semantic concept expressed in the text query and the content of the images to retrieve.
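As an illustration of the query-by-example paradigm discussed above, the following minimal sketch ranks database images by the distance between simple descriptors. The color-histogram descriptor and the chi-square distance used here are only illustrative assumptions, not the descriptors or distance functions developed in this project.

```python
import numpy as np

def color_histogram(image, bins=16):
    """Per-channel color histogram, L1-normalised, as a simple image descriptor."""
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
        for c in range(3)
    ]).astype(float)
    return hist / (hist.sum() + 1e-9)

def chi2_distance(h1, h2):
    """Chi-square distance, a common choice for comparing histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9))

def retrieve(query_image, database_images, k=5):
    """Rank database images by descriptor distance to the query (query-by-example)."""
    q = color_histogram(query_image)
    dists = [chi2_distance(q, color_histogram(img)) for img in database_images]
    return np.argsort(dists)[:k]  # indices of the k most similar images
```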

Furthermore, system precision can be clearly boosted by exploiting the user's supervision through human-computer interaction. Such interaction, in turn, requires well-designed approaches to be implemented correctly.

This research project proposes two goals. First, to design a base CBIR system that improves on state-of-the-art results by means of new image descriptors, new distance functions and the use of text queries. Second, to use the user's knowledge to iteratively improve the results of the image search through relevance feedback.
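The project description does not specify the feedback mechanism itself; as one standard reference point, the sketch below shows a classic Rocchio-style update, which moves the query descriptor towards images the user marked as relevant and away from those marked as non-relevant. The function name and the alpha, beta and gamma weights are illustrative assumptions.

```python
import numpy as np

def rocchio_update(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Classic Rocchio relevance-feedback update on a descriptor vector:
    pull the query towards relevant examples, push it away from non-relevant ones."""
    new_query = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        new_query += beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        new_query -= gamma * np.mean(non_relevant, axis=0)
    return new_query

# One feedback round (hypothetical usage): re-rank with the updated descriptor,
# reusing a descriptor-based retrieval function such as the one sketched earlier.
# updated = rocchio_update(query_descriptor, marked_relevant, marked_non_relevant)
```

Iterating this loop, with the user marking results at each round, is what allows the search to converge on the intended semantic concept.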

Members