Current rates of species loss have triggered numerous attempts to protect and conserve biodiversity. Species conservation, however, requires species identification skills, a competence obtained through intensive training and experience. Field researchers, land managers, educators, civil servants, and the interested public would greatly benefit from accessible, up-to-date tools that automate the process of species identification. Relevant technologies, such as digital cameras, mobile devices, and remote access to databases, are now ubiquitously available, accompanied by significant advances in image processing and pattern recognition. The idea of automated species identification is thus approaching reality. We review the technical status quo of computer vision approaches to plant species identification, highlight the main research challenges to overcome in providing applicable tools, and conclude with a discussion of open problems and future research directions.
The degradation of seven distinct sets (each with n ≥ 12 individual cells) of state-of-the-art organic photovoltaic devices prepared by leading research laboratories is investigated with a combination of imaging methods. All devices were shipped to and degraded at Risø DTU for up to 1830 hours in accordance with established ISOS-3 protocols under defined illumination conditions. Imaging of device function at different stages of degradation was performed by laser-beam-induced current (LBIC) scanning; luminescence imaging, specifically photoluminescence imaging (PLI) and electroluminescence imaging (ELI); and lock-in thermography (LIT). Each imaging technique has specific advantages for sensing certain degradation features, which are compared and discussed here in detail. Consequently, a combination of several imaging techniques yields conclusive information about the degradation processes controlling device function. The large variety of device architectures in turn enables valuable progress in the proper interpretation of the imaging results, revealing the benefits of this large-scale cooperation in advancing the understanding of organic solar cell aging and its interpretation by state-of-the-art imaging methods.
A large number of flexible polymer solar modules, each comprising 16 serially connected individual cells, were prepared at the experimental workshop at Risø DTU. The photoactive layer was prepared from several varieties of P3HT (Merck, Plextronics, BASF, and Risø DTU), and two varieties of ZnO (nanoparticulate and thin film) were employed as electron transport layers. The devices were all tested at Risø DTU, and the functional devices were subjected to an inter-laboratory study of the performance and stability of the modules over time in the dark, under light soaking, and under outdoor conditions. Twenty-four laboratories from 10 countries across four continents were involved in the studies. The reported results allowed for an analysis of the variability between different groups performing lifetime studies as well as a comparison of different testing procedures. These studies constitute the first steps toward establishing standard procedures for OPV lifetime characterization.
We apply luminescence imaging as a tool for the nondestructive visualization of degradation processes within bulk-heterojunction polymer solar cells. The imaging technique is based on luminescence detection with a highly sensitive silicon charge-coupled-device camera and is able to visualize the advance of degradation patterns in polymer solar cells over time. The devices investigated were aged under defined conditions and characterized periodically with current-voltage (I-V) sweeps. This allows the time evolution of the photovoltaic parameters to be determined and, in combination with the luminescence images, differences in the observed degradation behavior to be understood. The versatility of the method is demonstrated by a correlation between the local reduction of lateral luminescence and a fast decrease of the short-circuit current (Isc) due to the loss of active area. Differences in the degradation of the photovoltaic parameters under varied aging conditions are discussed.
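To make the reported Isc-luminescence correlation concrete, the following is a minimal, illustrative sketch (not the paper's analysis pipeline) of how a remaining active-area fraction could be estimated from a luminescence image by simple thresholding and used as a first-order predictor of Isc; the threshold value and synthetic image are assumptions for demonstration.

```python
# Illustrative sketch (assumption, not the paper's method): estimate the
# remaining active-area fraction of a cell from a luminescence image by
# thresholding, since Isc is reported to scale with the still-luminescent
# (active) area. Threshold and image are hypothetical.
import numpy as np

def active_area_fraction(lum_image: np.ndarray, threshold: float) -> float:
    """Fraction of pixels whose luminescence signal exceeds the threshold."""
    return float((lum_image > threshold).mean())

def predicted_isc(isc_initial: float, lum_image: np.ndarray,
                  threshold: float) -> float:
    """First-order estimate: Isc proportional to remaining active area."""
    return isc_initial * active_area_fraction(lum_image, threshold)

# Toy example: a synthetic 100x100 luminescence map where one quarter
# of the cell has degraded (emits below threshold).
img = np.ones((100, 100))
img[:50, :50] = 0.1                               # degraded corner
print(active_area_fraction(img, threshold=0.5))   # 0.75
print(predicted_isc(10.0, img, threshold=0.5))    # 7.5
```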
Human decision making often relies on visual information from different views or perspectives. In machine-learning-based image classification, however, we typically infer an object's class from a single image of the object. Especially for challenging classification problems, the visual information conveyed by a single image may be insufficient for an accurate decision. We propose a classification scheme that fuses visual information captured in images depicting the same object from multiple perspectives. Convolutional neural networks are used to extract and encode visual features from the multiple views, and we propose strategies for fusing this information. More specifically, we investigate the following three strategies: (1) fusing convolutional feature maps at differing network depths; (2) fusing bottleneck latent representations prior to classification; and (3) score fusion. We systematically evaluate these strategies on three datasets from different domains. Our findings emphasize the benefit of integrating information fusion into the network rather than performing it by post-processing of classification scores. Furthermore, we demonstrate through a case study that already trained networks can easily be extended by the best fusion strategy, outperforming other approaches by a large margin.
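The sketch below illustrates two of the three fusion strategies named above, bottleneck-representation fusion (2) and score fusion (3), for a two-view case. It is a minimal PyTorch example under stated assumptions: the ResNet-18 backbones, unshared branch weights, two views, and averaging as the score-fusion rule are illustrative choices, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): two of the fusion strategies
# described above, implemented with PyTorch for a two-view classifier.
import torch
import torch.nn as nn
import torchvision.models as models

class TwoViewClassifier(nn.Module):
    """Classifies an object from two views, fusing either bottleneck
    latent representations (strategy 2) or per-view scores (strategy 3)."""
    def __init__(self, num_classes: int, fusion: str = "bottleneck"):
        super().__init__()
        assert fusion in ("bottleneck", "score")
        self.fusion = fusion
        # One CNN backbone per view; weights are not shared here, although
        # sharing them is a common alternative design choice.
        self.branch_a = models.resnet18(weights=None)
        self.branch_b = models.resnet18(weights=None)
        feat_dim = self.branch_a.fc.in_features  # 512 for ResNet-18
        # Strip the original classification heads to expose the bottlenecks.
        self.branch_a.fc = nn.Identity()
        self.branch_b.fc = nn.Identity()
        if fusion == "bottleneck":
            # Strategy 2: concatenate latent representations, then classify.
            self.head = nn.Linear(2 * feat_dim, num_classes)
        else:
            # Strategy 3: classify each view separately, then average scores.
            self.head_a = nn.Linear(feat_dim, num_classes)
            self.head_b = nn.Linear(feat_dim, num_classes)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor):
        fa, fb = self.branch_a(view_a), self.branch_b(view_b)
        if self.fusion == "bottleneck":
            return self.head(torch.cat([fa, fb], dim=1))
        return (self.head_a(fa) + self.head_b(fb)) / 2  # score fusion

# Usage with dummy inputs: two 224x224 RGB views of the same objects.
model = TwoViewClassifier(num_classes=1000, fusion="bottleneck")
a = torch.randn(4, 3, 224, 224)
b = torch.randn(4, 3, 224, 224)
logits = model(a, b)  # shape: (4, 1000)
```

Strategy (1), fusing convolutional feature maps at intermediate depths, would instead merge the branches inside the backbones; it is omitted here to keep the sketch short.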
Background: Modern plant taxonomy reflects phylogenetic relationships among taxa based on proposed morphological and genetic similarities. However, taxonomic relatedness is not necessarily reflected in close overall resemblance, but rather in the commonality of very specific morphological characters or in similarity on the molecular level. It is an open research question to what extent phylogenetic relations within higher taxonomic levels such as genera and families are reflected by shared visual characters of the constituent species. It is consequently even more questionable whether the taxonomy of plants at these levels can be identified from images using machine learning techniques.

Results: Whereas previous studies on automated plant identification from images focused on the species level, we investigated classification at higher taxonomic levels such as genera and families. We used images of 1000 plant species that are representative of the flora of Western Europe. We tested how accurately a visual representation of genera and families can be learned from images of their species in order to identify the taxonomy of species included in and excluded from training. Using natural images with random content, roughly 500 images per species are required for accurate classification. The classification accuracy for 1000 species amounts to 82.2% and increases to 85.9% and 88.4% at the genus and family level, respectively. When classifying species excluded from training, the accuracy drops significantly to 38.3% and 38.7% at the genus and family level. Excluded species of well-represented genera and families can be classified with 67.8% and 52.8% accuracy.

Conclusion: Our results show that shared visual characters are indeed present at higher taxonomic levels. They are most dominantly preserved in flowers and leaves, and they enable state-of-the-art classification algorithms to learn accurate visual representations of plant genera and families. Given a sufficient amount and composition of training data, this allows for high classification accuracy that increases with the taxonomic level and even facilitates the taxonomic identification of species excluded from the training process.

Electronic supplementary material: The online version of this article (10.1186/s12859-018-2474-x) contains supplementary material, which is available to authorized users.
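For intuition on how species-, genus-, and family-level accuracies relate, the following is a small illustrative sketch of one simple baseline: summing a species classifier's output probabilities over a species-to-genus (or species-to-family) mapping. This is not the study's method, which learns genus- and family-level representations directly; the mapping and score array below are hypothetical placeholders.

```python
# Illustrative baseline (not the study's method): derive genus- or
# family-level predictions by summing species-level class probabilities
# over a taxonomy mapping.
import numpy as np

def aggregate_scores(species_probs: np.ndarray,
                     species_to_group: list[int],
                     num_groups: int) -> np.ndarray:
    """Sum per-species probabilities into their parent taxon.

    species_probs: (batch, num_species) softmax outputs of a species classifier
    species_to_group: parent group index (genus or family) for each species
    """
    group_probs = np.zeros((species_probs.shape[0], num_groups))
    for s, g in enumerate(species_to_group):
        group_probs[:, g] += species_probs[:, s]
    return group_probs

# Toy example: 4 species grouped into 2 genera.
probs = np.array([[0.1, 0.2, 0.3, 0.4]])  # one image, 4 species
species_to_genus = [0, 0, 1, 1]           # species 0,1 -> genus 0; 2,3 -> genus 1
print(aggregate_scores(probs, species_to_genus, num_groups=2))  # ≈ [[0.3 0.7]]
```

Aggregating scores this way can only help when the correct genus or family is well represented among the trained species, which is consistent with the reported gap between well-represented and poorly represented groups.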