Current state-of-the-art text transcription products rely on optical character recognition (OCR) technology for isolated characters, developed over the last decade. More recent OCR research systems and prototypes still require prior character segmentation, more or less explicitly. But character segmentation is simply impossible in unconstrained handwritten text images, such as those found in most old documents. Therefore, new, segmentation-free off-line Handwritten Text Recognition (HTR) technology is required to produce the kind of transcriptions we are interested in. The accuracy of state-of-the-art off-line HTR research systems is still very far from directly usable, so the only way to exploit this technology is to manually post-edit HTR transcription results. Given the high error rates involved, this is neither practical nor acceptable to professional users. These facts have led us to propose the development of computer-assisted solutions based on novel approaches to multimodal interaction and adaptive learning.
- Main objective: to develop advanced techniques and multimodal interfaces for the transcription of handwritten document images, following an interactive-predictive approach
- To develop advanced methods for handwritten text preprocessing
- To develop interactive techniques for handwritten text recognition and search
- To develop and assess multimodal user interaction techniques using on-line handwritten text recognition and pen/touch gestures
- To develop adaptive learning techniques for handwritten text recognition
- To define standard communication protocols that allow the platform to be extended with new recognition engines and the interaction to be adapted to the capabilities of each recognizer
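As a rough illustration of the interactive-predictive approach named in the main objective, the following sketch shows one correction round: the system proposes a transcription, the user validates a prefix and fixes the first error, and the system re-predicts the remaining suffix constrained by the validated prefix. All names and data here are illustrative; a real HTR engine would re-decode the line image conditioned on the prefix rather than re-rank a fixed n-best list.

```python
# Minimal sketch of an interactive-predictive transcription loop.
# The "recognizer" is a stand-in: it re-ranks a fixed n-best list;
# the actual project engines are not represented here.

def predict_suffix(hypotheses, prefix):
    """Return the best full hypothesis consistent with the validated prefix."""
    for words, score in sorted(hypotheses, key=lambda h: -h[1]):
        if words[:len(prefix)] == prefix:
            return words
    # No hypothesis matches: keep the validated prefix as-is.
    return prefix

# Toy n-best list for one handwritten line: (word sequence, confidence).
nbest = [
    (["the", "old", "nan", "wrote"], 0.9),  # top hypothesis, one error
    (["the", "old", "man", "wrote"], 0.8),
    (["the", "odd", "man", "wrote"], 0.3),
]

transcript = predict_suffix(nbest, [])       # system's first proposal
# The user reads left to right, accepts "the old",
# and corrects the third word to "man".
prefix = transcript[:2] + ["man"]
transcript = predict_suffix(nbest, prefix)   # suffix re-predicted
print(" ".join(transcript))                  # → the old man wrote
```

Each user correction thus does double duty: it fixes one error and constrains the recognizer, often repairing later errors in the same line for free.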