In this paper we propose a smoothing turbo equalizer based on the expectation propagation (EP) algorithm that achieves significantly improved performance compared to the Kalman smoother at similar complexity. In scenarios where high-order modulations and/or large-memory channels are employed, the optimal BCJR algorithm is computationally infeasible. In such situations, low-cost but suboptimal solutions, such as the linear minimum mean square error (LMMSE) equalizer, are commonly used. Recently, EP has been proposed as a tool to improve Kalman smoothing performance. In this paper we revise these solutions to apply EP at the smoothing stage, rather than at the forward and backward stages. We also better exploit the information coming from the channel decoder in the turbo equalization scheme. With these improvements we reduce the computational complexity, speed up convergence, and outperform previous approaches. We include simulation results showing the robust behavior of the proposed method across scenarios and its performance gains over other EP-based solutions in the literature.
In this paper we address the problem of offline handwritten text recognition (HTR) in historical documents when few labeled samples are available and some of them contain errors in the training set. Our three main contributions are: first, we analyze how to perform transfer learning (TL) from a massive database to a smaller historical database, determining which layers of the model need fine-tuning; second, we analyze methods to efficiently combine TL and data augmentation (DA); finally, we propose an algorithm to mitigate the effects of incorrect labeling in the training set. The methods are evaluated on the ICFHR 2018 competition database, Washington, and Parzival. Combining all these techniques, we demonstrate a remarkable reduction of the character error rate (CER), up to 6 percentage points in some cases, on the test set with little complexity overhead.
INDEX TERMS Connectionist temporal classification (CTC), convolutional neural networks (CNN), data augmentation (DA), deep neural networks (DNN), historical documents, long short-term memory (LSTM), offline handwritten text recognition (HTR), outlier detection, transfer learning.