The analysis of microscopy images has always been an important yet time-consuming process in materials science. Convolutional neural networks (CNNs) have been applied very successfully to a number of tasks, such as image segmentation. However, training a CNN requires a large amount of hand-annotated data, which is often scarce for materials science problems. We present a procedure for generating synthetic data based on ad hoc parametric data modelling to enhance the generalization of trained neural network models. Such an approach is particularly beneficial in situations where little real data can be gathered, and may make it feasible to train a neural network at all. Furthermore, we show that targeted data generation, which adaptively samples the parameter space of the generative models, gives superior results compared to generating random data points.
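The adaptive-sampling idea above can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the bounds, the `error_fn` callback (standing in for some measure of the trained model's error at a parameter point), and all constants are assumptions made for the example.

```python
import random

def adaptive_sample(param_space, error_fn, rounds=4, candidates=20, keep=5):
    """Sketch of error-guided sampling over a generative model's parameter space.

    param_space: list of (low, high) bounds per generator parameter (assumed).
    error_fn:    hypothetical callback returning the current model's error
                 for a synthetic sample rendered at a parameter point.
    """
    rng = random.Random(0)
    selected = []
    # Initial pool: uniform random points, i.e. the random-sampling baseline.
    pool = [tuple(rng.uniform(lo, hi) for lo, hi in param_space)
            for _ in range(candidates)]
    for _ in range(rounds):
        # Keep the points where the trained model currently performs worst ...
        pool.sort(key=error_fn, reverse=True)
        selected.extend(pool[:keep])
        # ... and refine the search by perturbing those hard points,
        # clamping each coordinate back into its parameter bounds.
        pool = [tuple(min(hi, max(lo, x + rng.gauss(0, 0.05 * (hi - lo))))
                      for x, (lo, hi) in zip(pt, param_space))
                for pt in selected[-keep:]
                for _ in range(candidates // keep)]
    return selected
```

The contrast with the random baseline is that later rounds concentrate new synthetic training samples in the regions of parameter space where the model still fails, rather than spreading them uniformly.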
Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing.
Figure 1: Left: an AR application developed with XML3D and Xflow; the displayed teapot jumps from one visible marker to the other. Center: another AR application, in which animated characters are displayed on top of the markers (character models by Valve Software). Right: several image processing operators implemented with Xflow.
Abstract: Recently, modern Web browsers became capable of supporting powerful, interactive 3D graphics, both via the low-level, imperative API of WebGL and via high-level, declarative approaches such as XML3D. The obvious next step (particularly with respect to mobile platforms) is to combine video from the real world with matched virtual content: Augmented or Mixed Reality (AR/MR). However, AR requires extensive image and video processing, feature detection and tracking, and applying the results to 3D rendering, all of which is hard to implement in a Web context.

In this paper we present a novel approach that encapsulates low-level image-processing and AR operations into reusable high-level XML3D/Xflow components that are part of the HTML5 DOM. A Web developer can then easily and flexibly arrange these components into (possibly complex) processing flow graphs without having to worry about the internal computations and the structure of these modules. Our extended Xflow implementation automatically optimizes, schedules, and synchronizes the processing of the flow graph(s) in the context of real-time 3D rendering. Finally, we provide an integration model that greatly simplifies building AR applications for the browser.

We demonstrate this with several simple AR and image processing applications using a polyfill implementation that works in all modern browsers, and we evaluate its performance. We also show how the declarative framework can be optimized with respect to performance and usability using parallelization with Web Workers and RiverTrail.
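The flow-graph idea can be pictured as declarative markup in the page itself. The snippet below is only a sketch of what such an arrangement might look like: the operator names (`xflow.captureVideo`, `xflow.detectMarkers`) and element structure are illustrative placeholders, not the exact components defined by the system described above.

```html
<!-- Hypothetical Xflow flow graph for marker-based AR.
     Operator names are illustrative, not the paper's actual API. -->
<xml3d>
  <!-- Source node: grab frames from the device camera. -->
  <data id="camera" compute="image = xflow.captureVideo()"></data>

  <!-- Processing node: consume the camera image, emit a marker pose. -->
  <data id="tracking" compute="transform = xflow.detectMarkers(image)">
    <data src="#camera"></data>
  </data>

  <!-- Rendering: the virtual object follows the tracked marker. -->
  <group transform="#tracking">
    <mesh src="#teapot" type="triangles"></mesh>
  </group>
</xml3d>
```

The point of the declarative form is that the developer wires nodes together by reference (`src`, `compute`), while the runtime decides scheduling, synchronization with rendering, and where each computation executes.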