The normal modes of two-dimensional (2D) dust clusters of three to seven particles in a complex plasma are investigated using an N-body simulation. The ion wakefield downstream of each particle is shown to induce coupling between horizontal and vertical modes. The rules of mode coupling are investigated by classifying the mode eigenvectors using Bessel and trigonometric functions indexed by order integers (m, n). It is shown that coupling occurs only between two modes with the same m, and that horizontal modes with a larger shear contribution exhibit weaker coupling. Three types of resonances are shown to occur when two coupled modes have the same frequency. Discrete instabilities caused by the first and third types of resonance are verified, and instabilities caused by the third type are found to induce melting. Melting is observed to proceed as a two-step process, with the solid-liquid transition closely obeying the Lindemann criterion.
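The abstract states that the solid-liquid transition closely obeys the Lindemann criterion. For finite 2D clusters this is commonly evaluated as the relative interparticle distance fluctuation, which stays small in the solid phase and jumps at melting. A minimal numpy sketch of such an estimator is below; the function name, the (frames, particles, 2) trajectory layout, and the use of the pair-averaged form are illustrative assumptions, not the paper's actual diagnostic.

```python
import numpy as np

def relative_distance_fluctuation(traj):
    """Pair-averaged relative interparticle distance fluctuation.

    traj : float array of shape (T, N, 2) -- N particle positions in the
           plane sampled over T frames (an assumed data layout).
    Returns the mean over pairs (i, j) of std(r_ij) / mean(r_ij); in
    Lindemann-type criteria melting is flagged when this exceeds a
    threshold of order 0.1.
    """
    T, N, _ = traj.shape
    # All pairwise separation vectors per frame: shape (T, N, N, 2)
    diff = traj[:, :, None, :] - traj[:, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # (T, N, N)
    i, j = np.triu_indices(N, k=1)                # unique pairs i < j
    d_ij = dist[:, i, j]                          # (T, n_pairs)
    mean_d = d_ij.mean(axis=0)                    # time average per pair
    std_d = d_ij.std(axis=0)                      # time fluctuation per pair
    return float(np.mean(std_d / mean_d))
```

A perfectly rigid cluster (identical positions in every frame) gives exactly zero, while thermal jitter produces a positive value that grows toward the melting threshold.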
The recognition performance of optical character recognition (OCR) models can be sub-optimal when document images suffer from various degradations. Supervised deep learning methods for image enhancement can generate high-quality enhanced images, but they require corresponding clean images or ground truth text. This requirement is often difficult to fulfill for real-world noisy documents: for instance, it can be challenging to create paired noisy/clean training datasets or to obtain ground truth text for noisy point-of-sale receipts and invoices. Unsupervised methods have been explored in recent years to enhance images in the absence of ground truth images or text, but these methods focus on natural scene images. For document images, preserving the readability of text in the enhanced output is of utmost importance for improved OCR performance. In this work, we propose a modified CycleGAN architecture that enhances document images with better text preservation. Inspired by the success of CNN-BiLSTM combination networks in text recognition models, we replace the discriminator network in the CycleGAN model with a combined CNN-BiLSTM network for better feature extraction from document images during discrimination. Results indicate that our proposed model not only preserves text better and improves OCR performance over the CycleGAN model, but also outperforms classical unsupervised image pre-processing techniques such as Sauvola and Otsu binarization.
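One of the classical baselines named in the abstract, Otsu binarization, picks a global threshold that maximizes the between-class variance of the grayscale histogram. A minimal numpy sketch of that computation is shown below; this is a generic textbook formulation for illustration, not the evaluation code used in the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold for a uint8 grayscale image.

    Returns the intensity t that maximizes the between-class variance
    sigma_b^2(t) = (mu_T * omega(t) - mu(t))^2 / (omega(t) * (1 - omega(t))),
    where omega is the cumulative class-0 probability and mu the
    cumulative mean intensity.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                        # P(pixel <= t)
    mu = np.cumsum(prob * np.arange(256))          # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)               # zero where a class is empty
    return int(np.argmax(sigma_b))

def otsu_binarize(gray):
    """Binarize: foreground where intensity exceeds the Otsu threshold."""
    return (gray > otsu_threshold(gray)).astype(np.uint8)
```

For a strongly bimodal image (e.g. dark ink on bright paper) the selected threshold falls between the two intensity modes, cleanly separating text from background; Sauvola differs by computing a local, window-based threshold instead of a single global one.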