Cassava is the third largest source of carbohydrates for human food in the world but is vulnerable to virus diseases, which threaten to destabilize food security in sub-Saharan Africa. Novel methods of cassava disease detection are needed to support improved control and help avert this crisis. Image recognition offers both a cost-effective and scalable technology for disease detection. New deep learning models offer an avenue for this technology to be easily deployed on mobile devices. Using a dataset of cassava disease images taken in the field in Tanzania, we applied transfer learning to train a deep convolutional neural network to identify three diseases and two types of pest damage (or lack thereof). The best-trained model's accuracies were 98% for brown leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage (GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic disease (CMD). The best model achieved an overall accuracy of 93% for data not used in the training process. Our results show that the transfer learning approach for image recognition of field images offers a fast, affordable, and easily deployable strategy for digital plant disease detection.
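The transfer-learning idea described above can be sketched in miniature: a pretrained feature extractor is kept frozen and only a small classification head is retrained on the new labels. In this toy Python sketch the "frozen backbone" is a fixed random projection and the head is a perceptron; the paper's actual model is a deep CNN, and all data and dimensions here are illustrative assumptions, not the authors' setup.

```python
import random

random.seed(0)

DIM_IN, DIM_FEAT = 4, 8

# "Frozen backbone": a fixed random projection, never updated during training.
backbone = [[random.uniform(-1, 1) for _ in range(DIM_IN)] for _ in range(DIM_FEAT)]

def features(x):
    """Map an input vector through the frozen feature extractor."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in backbone]

def train_head(data, epochs=200, lr=0.1):
    """Train only the classification head (perceptron rule); backbone stays frozen."""
    w = [0.0] * DIM_FEAT
    b = 0.0
    for _ in range(epochs):
        for x, y in data:  # labels y in {-1, +1}
            f = features(x)
            pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * fi for wi, fi in zip(w, f)]
                b += lr * y
    return w, b

def predict(w, b, x):
    f = features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else -1

# Toy "diseased vs. healthy" data: two linearly separable clusters.
data = [([1, 1, 0, 0], 1), ([0.9, 1.1, 0, 0], 1),
        ([0, 0, 1, 1], -1), ([0, 0, 1.1, 0.9], -1)]
w, b = train_head(data)
accuracy = sum(predict(w, b, x) == y for x, y in data) / len(data)
```

The design point the sketch isolates is the one that makes transfer learning cheap: gradient updates touch only the small head, so far fewer labeled field images are needed than for training a full network from scratch.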
Assessment dataset were combined with a stack of over 200 environmental datasets and gSSURGO polygon maps to generate complete-coverage gridded predictions at 100-m spatial resolution of six soil properties (percentage of organic C, total N, bulk density, pH, and percentage of sand and clay) and two US soil taxonomic classes (291 great groups [GGs] and 78 modified particle size classes [mPSCs]) for the conterminous United States. Models were built using parallelized random forest and gradient boosting algorithms as implemented in the ranger and xgboost packages for R. Soil property predictions were generated at seven standard soil depths (0, 5, 15, 30, 60, 100, and 200 cm). Prediction probability maps for US soil taxonomic classifications were also generated. Cross-validation results indicated an out-of-bag classification accuracy of 60% for GGs and 66% for mPSCs; for soil properties, RMSE for leave-location-out cross-validation was 0.74 (R² = 0.68), 17.8 wt% (R² = 0.57), 12 wt% (R² = 0.46), 3.63 wt% (R² = 0.41), 0.2 g cm⁻³ (R² = 0.42), and 0.27 wt% (R² = 0.39) for pH, percentage of sand, percentage of clay, weight percentage of organic C, bulk density, and weight percentage of total N, respectively. Nine independent validation datasets were used to assess prediction accuracies for soil class models, and results ranged between 24 and 58% for GG and between 24 and 93% for mPSC prediction accuracies. Although mapping accuracies were variable and likely lower than gSSURGO in some areas, this modeling approach can enable easier integration of soil information with spatially explicit models compared with multicomponent map units.
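The leave-location-out cross-validation cited above differs from ordinary random splits: all samples from one location (e.g., all depths of one soil profile) are held out together, so the test fold never shares a site with the training fold and the score reflects genuine spatial generalization. A minimal Python sketch of the fold construction, with illustrative field names (the paper's implementation is in R, and the real pipeline would call the ranger/xgboost models inside each fold):

```python
from collections import defaultdict

def leave_location_out_folds(samples):
    """Yield (location, train_indices, test_indices), one fold per unique location."""
    by_location = defaultdict(list)
    for i, s in enumerate(samples):
        by_location[s["location"]].append(i)
    for loc, test_idx in by_location.items():
        train_idx = [i for i in range(len(samples))
                     if samples[i]["location"] != loc]
        yield loc, train_idx, test_idx

# Toy profile data: two depths sampled at each of three sites.
samples = [
    {"location": "A", "depth_cm": 0}, {"location": "A", "depth_cm": 30},
    {"location": "B", "depth_cm": 0}, {"location": "B", "depth_cm": 30},
    {"location": "C", "depth_cm": 0}, {"location": "C", "depth_cm": 30},
]
folds = list(leave_location_out_folds(samples))
```

Grouping by site before splitting prevents the optimistic bias that arises when near-duplicate samples from the same profile land on both sides of a random split.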
Convolutional neural network (CNN) models have the potential to improve plant disease phenotyping, where the standard approach is visual diagnostics requiring specialized training. In scenarios where a CNN is deployed on mobile devices, models are presented with new challenges due to lighting and orientation. It is essential for model assessment to be conducted in real-world conditions if such models are to be reliably integrated with computer vision products for plant disease phenotyping. We train a CNN object detection model to identify foliar symptoms of diseases in cassava (Manihot esculenta Crantz). We then deploy the model in a mobile app and test its performance on mobile images and video of 720 diseased leaflets in an agricultural field in Tanzania. Within each disease category we test two levels of symptom severity, mild and pronounced, to assess the model's performance for early detection of symptoms. At both severities we see a decrease in performance for real-world images and video as measured with the F-1 score. The F-1 score dropped by 32% for pronounced symptoms in real-world images (the data closest to the training data) due to a decrease in model recall. If the potential of mobile CNN models is to be realized, our data suggest it is crucial to consider tuning recall in order to achieve the desired performance in real-world settings. In addition, the varied performance related to different input data (image or video) is an important consideration for design in real-world applications.
Ignition quality tester (IQT) derived cetane numbers (DCNs) of binary blends of each individual alcohol (1-, 2-, iso-, and t-butanol and ethanol) with each second component (n-heptane and a real distillate fuel) have been measured to explore the autoignition behavior of these mixtures. This study pays particular attention to the effect of physical property variation within and among families of mixtures on their apparent reactivities. The relative reactivities of these blends are dominated by chemical kinetics, while blend-specific physical properties affect relative ignitability only slightly. The results firmly support DCN measurement as a means of characterizing mechanistic ignition chemistry behaviors among fuels and their blends. Surprisingly, t-butanol, which has been shown in other studies to be the least reactive pure C4 alcohol, shows the least suppression of reactivity when blended with heptane or diesel fuel for most mixture fractions. This result is related to the lack of easily abstractable H atoms in t-butanol, relative to the other alcohols investigated, an explanation hitherto applied only to pure-component butanol reactivity. Measured DCN values are shown to fit a one-parameter cetane number blending model well. Predictions from this model show that up to several percent of the considered alcohols can be blended into diesel-like fuels without significant deterioration of the cetane number.
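One common form such a one-parameter blending model takes is to assign each alcohol a single "blending cetane number" CN_b and model the blend DCN as a linear mix of the base-fuel DCN and CN_b. The abstract does not restate the model's functional form, so the Python sketch below is an assumption of that linear-by-fraction form, and all numbers (base DCN, fractions) are synthetic, not the paper's calibration data:

```python
def fit_blending_number(base_dcn, fractions, measured_dcn):
    """Least-squares fit of the single parameter CN_b in the assumed model
    DCN(v) = (1 - v) * base_dcn + v * CN_b,
    where v is the alcohol blend fraction."""
    num = sum(v * (d - (1 - v) * base_dcn)
              for v, d in zip(fractions, measured_dcn))
    den = sum(v * v for v in fractions)
    return num / den

# Synthetic consistency check: data generated with CN_b = 10 should
# recover CN_b = 10 from the fit.
base = 54.0                       # illustrative base-fuel DCN, not a measurement
vols = [0.1, 0.2, 0.3]            # illustrative alcohol fractions
meas = [(1 - v) * base + v * 10.0 for v in vols]
cn_b = fit_blending_number(base, vols, meas)
```

Because the model is linear in its single parameter, the least-squares solution is closed-form, which is what makes such blending correlations convenient for predicting how much alcohol a diesel-like fuel can tolerate.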
Core Ideas:
- A soil bulk density pedotransfer function for the conterminous United States.
- Across a climate gradient, the PTF provided bulk densities to estimate SOC stocks.
- The PTF model and the resulting bulk density estimates are available for use under an Open Data license.
This paper describes a method to develop a soil bulk density pedotransfer function (PTF) using the Random Forest machine-learning algorithm with soil and environmental data for the conterminous United States. Complete data from 45,818 horizons were extracted from the National Cooperative Soil Survey (NCSS) soil characterization database and used to calibrate and validate the PTF. Environmental data included surficial materials and hierarchical ecosystem land classifications. The results of a five-fold cross-validation showed that the average root mean squared prediction error (RMSPE) was 0.13 g cm⁻³, and the mean prediction error (MPE) was −0.001 g cm⁻³. An illustrative example of a weight-to-area conversion using the PTF was done with soil organic carbon (SOC) stocks. The fitted PTF can be used to fill in data gaps for volumetric assessments, as was done for SOC stock calculations. It could also be used with other international soil datasets if environmental data for surficial materials and ecoregion province can be determined and related to categories present in the United States. The PTF model and the resulting bulk density estimates are available for use under an Open Data license and can be accessed from Harvard Dataverse.
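The weight-to-area conversion the abstract illustrates is a short piece of unit arithmetic: a PTF-predicted bulk density turns a gravimetric SOC concentration (weight percent) into an areal SOC stock for a soil layer. A minimal Python sketch, with unit choices and example values that are mine, not the paper's:

```python
def soc_stock_kg_m2(oc_pct, bulk_density_g_cm3, thickness_m):
    """SOC stock of one soil layer in kg C per square metre.

    oc_pct               organic C concentration, weight percent
    bulk_density_g_cm3   bulk density, g per cubic cm (e.g., a PTF prediction)
    thickness_m          layer thickness, metres
    """
    bd_kg_m3 = bulk_density_g_cm3 * 1000.0  # 1 g cm^-3 = 1000 kg m^-3
    return (oc_pct / 100.0) * bd_kg_m3 * thickness_m

# Illustrative layer: 2% OC, 1.3 g cm^-3, 0-30 cm depth -> 7.8 kg C m^-2.
stock = soc_stock_kg_m2(2.0, 1.3, 0.3)
```

This is exactly the gap the PTF fills: laboratory databases often report OC concentration but lack measured bulk density, and without it the concentration cannot be converted to a stock.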
Nuru is a deep learning object detection model for diagnosing plant diseases and pests developed as a public good by PlantVillage (Penn State University), FAO, IITA, CIMMYT, and others. It provides a simple, inexpensive and robust means of conducting in-field diagnosis without requiring an internet connection. Diagnostic tools that do not require the internet are critical for rural settings, especially in Africa where internet penetration is very low. An investigation was conducted in East Africa to evaluate the effectiveness of Nuru as a diagnostic tool by comparing the ability of Nuru, cassava experts (researchers trained on cassava pests and diseases), agricultural extension officers and farmers to correctly identify symptoms of cassava mosaic disease (CMD), cassava brown streak disease (CBSD) and the damage caused by cassava green mites (CGM). The diagnosis capability of Nuru and that of the assessed individuals was determined by inspecting cassava plants and by using the cassava symptom recognition assessment tool (CaSRAT) to score images of cassava leaves, based on the symptoms present. Nuru could diagnose symptoms of cassava diseases at a higher accuracy (65% in 2020) than the agricultural extension agents (40–58%) and farmers (18–31%). Nuru’s accuracy in diagnosing cassava disease and pest symptoms, in the field, was enhanced significantly by increasing the number of leaves assessed to six leaves per plant (74–88%). Two weeks of Nuru practical use provided a slight increase in the diagnostic skill of extension workers, suggesting that a longer duration of field experience with Nuru might result in significant improvements. 
Overall, these findings suggest that Nuru can be an effective tool for in-field diagnosis of cassava diseases and has the potential to be a quick and cost-effective means of disseminating knowledge from researchers to agricultural extension agents and farmers, particularly on the identification of disease symptoms and their management practices.
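A back-of-envelope probability model helps explain why assessing six leaves per plant raised Nuru's field accuracy: if each inspected leaf independently shows detectable symptoms with probability p, the chance that at least one of n leaves triggers a diagnosis is 1 − (1 − p)^n. This independence assumption is a deliberate simplification of Nuru's actual aggregation over leaves, and the probabilities below are illustrative, not measured values:

```python
def detect_prob(p_per_leaf, n_leaves):
    """Probability that at least one of n inspected leaves shows
    detectable symptoms, assuming independence between leaves
    (a simplifying assumption, not Nuru's actual aggregation rule)."""
    return 1.0 - (1.0 - p_per_leaf) ** n_leaves

p1 = detect_prob(0.65, 1)  # single-leaf inspection
p6 = detect_prob(0.65, 6)  # six leaves, as in the enhanced protocol
```

In practice leaves on one plant are correlated (an infected plant tends to show symptoms on several leaves, an uninfected one on none), which is why the observed gain (74–88% versus 65%) is more modest than the independent-leaf upper bound this sketch produces.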