ICIP 2021 Paper Detail

Paper ID: IMT-1.1
Paper Title: Semantic-based Sentence Recognition in Images Using Bimodal Deep Learning
Authors: Yi Zheng, Boston University, United States; Qitong Wang, Virginia Tech, United States; Margrit Betke, Boston University, United States
Session: IMT-1: Computational Imaging Learning-based Models
Location: Area J
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Computational Imaging Methods and Models: Learning-Based Models
Abstract: The accuracy of computer vision systems that understand sentences in images containing text can be improved when semantic information about the text is utilized. Nonetheless, the semantic coherence within a region of text in natural or document images is typically ignored by state-of-the-art systems, which identify isolated words or interpret text word by word. However, when analyzed together, seemingly isolated words may be easier to recognize. On this basis, we propose a novel “Semantic-based Sentence Recognition” (SSR) deep learning model that reads text in images with the help of contextual understanding. SSR consists of a Word Ordering and Grouping Algorithm (WOGA) to find sentences in images and a Sequence-to-Sequence Recognition Correction (SSRC) model to extract semantic information from these sentences to improve their recognition. To show the effectiveness and generality of SSR in recognizing text, we present experiments with three notably distinct datasets, two of which we created ourselves. They contain, respectively, scanned catalog images of interior designs and photographs of protesters with hand-written signs. Our results show that SSR statistically significantly outperforms a baseline method that uses state-of-the-art single-word-recognition techniques on these three datasets. By successfully combining computer vision and natural language processing methodologies, we reveal the important opportunity that bimodal deep learning can provide in addressing a task that was previously considered a single-modality computer vision task.
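
As a concrete illustration of the sentence-grouping idea the abstract describes, the sketch below orders recognized word boxes into reading order and clusters them into lines before any sentence-level correction would be applied. This is only a hypothetical stand-in for the paper's WOGA step, not the authors' implementation: the Word data structure, the vertical-proximity heuristic, and the line_tol threshold are assumptions made for illustration.

# Hypothetical sketch of grouping recognized words into sentences
# (a stand-in for the WOGA step named in the abstract); thresholds and
# data structures are assumptions, not the authors' implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class Word:
    text: str     # word hypothesis from a single-word recognizer
    x: float      # left edge of the word's bounding box
    y: float      # vertical centre of the bounding box
    height: float # box height, used as a line-spacing scale

def group_into_sentences(words: List[Word], line_tol: float = 0.6) -> List[str]:
    """Order word boxes top-to-bottom, cluster them into lines, then read
    each line left-to-right, so a downstream sequence-to-sequence correction
    model can operate on whole sentences instead of isolated words."""
    lines: List[List[Word]] = []
    for w in sorted(words, key=lambda w: w.y):
        # Attach the word to the current line if it is vertically close
        # enough to the previous word; otherwise start a new line.
        if lines and abs(w.y - lines[-1][-1].y) <= line_tol * w.height:
            lines[-1].append(w)
        else:
            lines.append([w])
    return [" ".join(w.text for w in sorted(line, key=lambda w: w.x))
            for line in lines]

if __name__ == "__main__":
    demo = [Word("FREE", 10, 12, 10), Word("THE", 60, 11, 10),
            Word("PRESS", 100, 13, 10), Word("NOW", 55, 40, 10)]
    print(group_into_sentences(demo))  # ['FREE THE PRESS', 'NOW']

Grouping by vertical proximity first keeps the recovered text in natural reading order; in the paper this role is played by WOGA, while the semantic correction of the grouped sentences would be handled by a learned model such as SSRC.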