Description

Handwritten keyword spotting is the task of detecting query words in collections of handwritten document images without involving a traditional OCR step. Handwritten word spotting has recently attracted the attention of the document image analysis and recognition community, since it has proved to be a feasible solution for indexing and retrieving handwritten documents in cases where OCR-based methods fail to deliver proper results.

This proposal, within the framework of the upcoming ICFHR 2016 conference, is a joint effort between the organisers of the ICFHR 2014 H-KWS Competition and the ICDAR 2015 Competition on KWS. It aims to set up an evaluation framework for benchmarking the two distinct approaches to keyword spotting, namely Query by Example (QbE) and Query by String (QbS). This distinction is the main motivation for hosting two different tracks under the proposed competition, one for each of these keyword spotting variations.

Clearly, each of these variations of the KWS problem has its own degree of difficulty and its own target applications. For instance, QbS is mandatory for applications involving large-scale handwritten image indexing and search under the precision-recall trade-off model; at such scales, training-based KWS can be very advantageous. Other kinds of applications involve assisting human transcribers by allowing them to find words in a document whose shape is similar to a given word or part of a word (perhaps one which the transcriber is not sure how to transcribe when it appears for the first time). In such applications, a training-free QbE system is most appropriate.

Although QbS and QbE address fundamentally different problems, they are unified at the technical level: both may either depend on a prior segmentation (segmentation-based) or not (segmentation-free), and both may either involve training on data (supervised) or not (unsupervised). All of these alternatives will be examined in the proposed competition, which distinguishes it from previously organised efforts.

Last but not least, the proposed competition will introduce new challenges through the use of newly released datasets.

Tracks

The competition will comprise two distinct tracks:

Track I: Query-by-Example (QbE)

The keywords will be given as image patches. Participants will have to provide a list of bounding boxes, sorted by confidence, indicating the image regions where the query keywords are spotted. Average Precision will be computed based on the overlapping area between the proposed matches and the ground truth.
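
To make the expected deliverable concrete, the sketch below defines a hypothetical record for a single entry of such a ranked list. The official submission format will be fixed by the auxiliary scripts released to participants, so every field name here is an assumption.

    from dataclasses import dataclass

    @dataclass
    class SpottingMatch:
        # All field names are hypothetical; the official submission format
        # will be defined by the competition's auxiliary scripts and tools.
        query_id: str      # identifier of the query keyword (image patch or string)
        page_id: str       # document image in which the match was found
        x1: int            # top-left corner of the bounding box
        y1: int
        x2: int            # bottom-right corner of the bounding box
        y2: int
        confidence: float  # score by which the result list is sorted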

Track II: Query-by-String (QbS)

The keywords will be given as text strings. Participants will have to provide a list of bounding boxes, sorted by confidence, indicating the image regions where the query keywords are spotted. Average Precision will be computed based on the overlapping area between the proposed matches and the ground truth.
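
For concreteness, the sketch below shows one common way such an evaluation can be implemented: intersection-over-union (IoU) as the overlap score between a proposed box and a ground-truth box, and Average Precision computed over the confidence-sorted result list of a query. The exact overlap criterion, threshold, and tie handling used by the official evaluation tools are not specified here, so those details are assumptions.

    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    def average_precision(ranked_hits, num_relevant):
        """Average Precision for a single query.

        ranked_hits  -- confidence-sorted list of booleans, True where the
                        proposed box sufficiently overlaps a ground-truth box
                        (e.g. IoU above an assumed threshold such as 0.5).
        num_relevant -- total number of ground-truth instances of the keyword.
        """
        tp, precision_sum = 0, 0.0
        for rank, hit in enumerate(ranked_hits, start=1):
            if hit:
                tp += 1
                precision_sum += tp / rank
        return precision_sum / num_relevant if num_relevant else 0.0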

For both tracks, the following settings will be considered:

  • Segmentation-based: The correct bounding boxes of each word in the test images will be provided to the participants.
  • Segmentation-free: No word, line, or text-block segmentation will be available for the test images; only the raw document images will be provided to participants.
  • Training data: Training data will consist of transcribed line images, correctly extracted from documents of the same collection as the test documents. Word bounding boxes will also be provided for at least a fraction of the training images. Additional text-only training data may also be provided to support systems that rely on lexical and language modelling. The amount of training data used by each entrant system will be taken into account in its final score.

The H-KWS 2016 dataset will contain a moderately large subset of one of the large collections considered in the READ project (Recognition and Enrichment of Archival Documents, ref: 674943).

Several word images corresponding to various keywords will be manually marked in the dataset to build the corresponding ground truth. Upon registration, each participant will receive part of the dataset along with the associated ground-truth information in order to become familiar with the data at hand. The expected deliverable from each registered participant is the set of ranked result lists produced for the required queries. Performance evaluation will be based upon established measures frequently encountered in the multimedia retrieval literature. After the completion of the competition, the test dataset (including the ground truth) will become publicly available.
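
The most widely used such measure in this setting is mean Average Precision (mAP), i.e. the mean of the per-query Average Precision values. A minimal sketch, reusing the average_precision helper from the Track II example above:

    def mean_average_precision(per_query):
        """Mean AP over all queries.

        per_query maps each query id to a (ranked_hits, num_relevant) pair
        as expected by the average_precision function sketched earlier.
        """
        aps = [average_precision(hits, n) for hits, n in per_query.values()]
        return sum(aps) / len(aps) if aps else 0.0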

Schedule

  • Feb 22, 2016 Registration opens.
  • Apr 1, 2016 An initial subset of the training and validation data, along with auxiliary scripts and test tools, will be made available to registered participants.
  • Jun 12, 2016 Registration closes. No further participants will be accepted.
  • Jun 13, 2016 Release of the test dataset to the participants. The remaining training data will be released starting from this date.
  • Jun 29, 2016 Deadline for submitting the results on the test set.
  • Oct 23-26, 2016 The winners and the final ranking of all teams will be made public at the ICFHR 2016 conference.

Organizers

Please do not hesitate to write to any of us with your questions, concerns, or comments about the competition.

  • Dr. Basilis Gatos [2], bgat_AT_iit_DOT_demokritos_DOT_gr
  • Dr. Ioannis Pratikakis [1], ipratika_AT_ee_DOT_duth_DOT_gr
  • Joan Puigcerver [3], joapuipe_AT_prhlt_DOT_upv_DOT_es
  • Dr. Alejandro H. Toselli [3], ahector_AT_prhlt_DOT_upv_DOT_es
  • Dr. Enrique Vidal [3], evidal_AT_prhlt_DOT_upv_DOT_es
  • Dr. Konstantinos Zagoris [1], kzagoris_AT_ee_DOT_duth_DOT_gr

[1] Visual Computing Group, Department of Electrical and Computer Engineering, DUTh
[2] Institute of Informatics and Telecommunications, NCSR "Demokritos"
[3] Pattern Recognition and Human Language Technologies, UPV