IntuiScript is an innovative project aiming to develop a digital workbook that provides feedback during the handwriting learning process for children from three to seven years old. In this context, the paper presents a method for analysing handwriting quality that meets the expectations of the IntuiScript educational scenario: on-line, real-time feedback for children, automatic detection of children's mistakes to guide the pedagogical progression, and a precise analysis of each child's writing, saved to help teachers understand the child's writing skills. The presented method introduces a multi-criteria architecture that analyses handwriting quality along three different aspects: shape, order and direction. The proposed approach is validated on a realistic dataset collected in preschools and primary schools from 952 children. Results show positive feedback from children and teachers about the use of tactile digital devices, and a significant improvement in the performance of the multi-criteria architecture compared to the previous analyser. The ground truth has been annotated by experts with different levels of confidence, and specific evaluation metrics are introduced to handle these confidence-weighted annotations.
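The abstract does not detail how the three criteria (shape, order, direction) are fused into a quality judgement. As a minimal sketch, assuming each criterion yields a normalized score in [0, 1], a weighted aggregation could look like the following; the function name, weights, and linear fusion rule are all illustrative assumptions, not the paper's actual architecture:

```python
def handwriting_quality(shape: float, order: float, direction: float,
                        weights: tuple = (0.5, 0.25, 0.25)) -> float:
    """Combine per-criterion scores (each in [0, 1]) into one quality score.

    Hypothetical linear fusion; the paper's multi-criteria analyser
    is not reproduced here.
    """
    scores = (shape, order, direction)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("criterion scores must lie in [0, 1]")
    return sum(w * s for w, s in zip(weights, scores))
```

A weighted sum is only one possible fusion rule; a real analyser might instead report the three scores separately so a teacher can see which aspect of the writing needs work.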
This paper presents an open tool for standardizing the evaluation process of the layout analysis task on document images at pixel level. We introduce a new evaluation tool that is available both as a standalone Java application and as a RESTful web service. This evaluation tool is free and open-source so that it can serve as a common tool that anyone can use and contribute to. It aims to provide as many metrics as possible for investigating layout analysis predictions, and also offers an easy way to visualize the results. The tool evaluates document segmentation at pixel level and supports multi-labeled pixel ground truth. Finally, it has been successfully used for the ICDAR 2017 competition on Layout Analysis for Challenging Medieval Manuscripts.
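The abstract leaves the individual metrics unspecified. As one representative pixel-level metric, a per-label intersection-over-union over label masks could be computed as in the sketch below (Python rather than the tool's Java implementation; the function name and signature are hypothetical):

```python
import numpy as np

def pixel_iou(pred: np.ndarray, gt: np.ndarray, num_labels: int) -> dict:
    """Per-label intersection-over-union for pixel-level layout masks.

    pred, gt: 2-D integer arrays of label ids with identical shape.
    Returns a dict mapping label id -> IoU (NaN when the label is
    absent from both prediction and ground truth).
    """
    if pred.shape != gt.shape:
        raise ValueError("prediction and ground truth must have the same shape")
    ious = {}
    for label in range(num_labels):
        p = pred == label
        g = gt == label
        union = np.logical_or(p, g).sum()
        inter = np.logical_and(p, g).sum()
        ious[label] = inter / union if union else float("nan")
    return ious
```

Multi-labeled ground truth (one pixel carrying several layout classes at once) would instead use one boolean mask per label rather than a single id per pixel, but the per-label intersection/union computation stays the same.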
We propose a novel approach to adversarial attacks on neural networks (NNs), focusing on tampering with the data used for training rather than generating attacks on trained models. Our network-agnostic method creates a backdoor during training that can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all the images of one class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing them to misclassify any image to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained locally on public datasets can be subject to attacks by a skillful adversary.
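The kind of training-set modification described above can be sketched with NumPy as follows; the pixel position, trigger value, and function names are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def poison_class(images: np.ndarray, labels: np.ndarray, target_class: int,
                 pixel: tuple = (0, 0), value: int = 255) -> np.ndarray:
    """Backdoor-style poisoning sketch: overwrite one fixed pixel in every
    training image of `target_class`.

    images: array of shape (N, H, W, C); labels: array of shape (N,).
    Returns a poisoned copy; the originals are left untouched.
    """
    poisoned = images.copy()
    y, x = pixel
    poisoned[labels == target_class, y, x, :] = value
    return poisoned

def apply_trigger(image: np.ndarray, pixel: tuple = (0, 0),
                  value: int = 255) -> np.ndarray:
    """At test time, stamping the same pixel activates the backdoor."""
    out = image.copy()
    y, x = pixel
    out[y, x, :] = value
    return out
```

A network trained on such data can come to associate the trigger pixel with the poisoned class, so stamping the same pixel onto any test image steers the prediction, which is what makes the attack universal.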