Narcissus flowers are used as cut flowers and as a source of high-quality essential oils for the perfume industry. As a winter crop in the Mediterranean area, it flowers at temperatures ranging between 10 and 15°C during the day and 3–10°C during the night. Here we tested the impact of different light and temperature conditions on scent quality during post-harvest. We applied two combinations of thermoperiod and photoperiod, as well as constant darkness and constant temperatures. We found that under a 12:12 light:dark (LD) cycle and a 15–5°C thermoperiod, Narcissus emitted monoterpenes and phenylpropanoids. Increasing the temperature to 20–10°C in a 12:12 LD cycle caused the loss of cinnamyl acetate and the emission of indole. Under constant darkness, there was a loss of scent complexity. A constant temperature of 20°C decreased scent complexity, and the decrease was more dramatic at 5°C, when the total number of compounds emitted dropped from thirteen to six. Distance analysis confirmed that a constant temperature of 20°C causes the most divergent scent profile. We found a set of four volatiles, benzyl acetate, eucalyptol, linalool, and ocimene, that displayed robust production under differing environmental conditions, while others were consistently dependent on light or thermoperiod. Scent emission changed significantly during the day and between different light and temperature treatments. Under a light:dark cycle and 15–5°C, the maximum was detected during the light phase, but this peak shifted toward night under 20–10°C. Moreover, under constant darkness the peak occurred at midnight, and under constant temperature, at the end of the night. Using machine learning, we found that indole was the volatile with the highest discrimination ranking, followed by D-limonene. Our results indicate that light and temperature regimes play a critical role in scent quality. The richest scent profile is obtained by keeping flowers under a 15–5°C thermoperiod and a 12:12 light:dark photoperiod.
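The abstract does not name the machine learning method behind the volatile ranking. As an illustration only, the sketch below ranks volatiles by a simple univariate effect size (Cohen's d) between two treatments, showing how a compound emitted under one regime but not the other, such as indole, would top a discrimination ranking. All data, sample sizes, and emission values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical emission levels (arbitrary units) for three volatiles
# under two treatments (15-5C LD vs 20-10C LD), 20 samples each.
volatiles = ["indole", "D-limonene", "linalool"]
treat_a = np.column_stack([
    rng.normal(0.1, 0.05, 20),   # indole: nearly absent under 15-5C
    rng.normal(2.0, 0.30, 20),   # D-limonene: higher under 15-5C
    rng.normal(1.0, 0.20, 20),   # linalool: robust, similar in both
])
treat_b = np.column_stack([
    rng.normal(3.0, 0.40, 20),   # indole: strongly emitted under 20-10C
    rng.normal(1.0, 0.30, 20),
    rng.normal(1.05, 0.20, 20),
])

def effect_size(a, b):
    """Cohen's d per feature: mean separation scaled by pooled spread."""
    pooled = np.sqrt((a.var(axis=0) + b.var(axis=0)) / 2)
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / pooled

scores = effect_size(treat_a, treat_b)
ranking = [volatiles[i] for i in np.argsort(scores)[::-1]]
print(ranking)  # volatiles ordered by discriminative power
```

A tree-based classifier's feature importances would serve the same purpose on real data; the univariate score is used here only to keep the sketch self-contained.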
Current predefined architectures for deep learning are computationally very heavy and use tens of millions of parameters; their computational cost may therefore be prohibitive for many experimental or technological setups. We developed an ad hoc architecture for the classification of multispectral images using deep learning techniques. The architecture, called 3DeepM, is composed of 3D filter banks especially designed for the extraction of spatial-spectral features in multichannel images. The new architecture was tested on a sample of 12,210 multispectral images of seedless table grape varieties: Autumn Royal, Crimson Seedless, Itum4, Itum5 and Itum9. 3DeepM classified 100% of the images and obtained the best overall results in terms of accuracy, number of classes, number of parameters and training time compared to similar work. In addition, this paper presents a flexible and reconfigurable computer vision system designed for the acquisition of multispectral images in the range of 400 nm to 1000 nm. The vision system enabled the creation of the first dataset consisting of 12,210 37-channel multispectral images (12 VIS + 25 IR) of five seedless table grape varieties, which was used to validate the 3DeepM architecture. Compared to predefined classification architectures such as AlexNet and ResNet, and to ad hoc architectures with a very high number of parameters, 3DeepM shows the best classification performance despite using 130-fold fewer parameters than the architectures to which it was compared. 3DeepM can be used in a multitude of applications involving multispectral images, such as remote sensing or medical diagnosis. In addition, the small number of parameters of 3DeepM makes it ideal for online classification systems aboard autonomous robots or unmanned vehicles.
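3DeepM itself is not reproduced here. As a minimal sketch of its core idea, a 3D filter sliding jointly over the two spatial axes and the spectral axis of a multichannel image, the following numpy-only valid-mode convolution is illustrative; the image size, filter shape, and filter values are assumptions, not the published design.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation) of a
    (bands, H, W) volume with a (kb, kh, kw) kernel."""
    b, h, w = volume.shape
    kb, kh, kw = kernel.shape
    out = np.empty((b - kb + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+kb, j:j+kh, k:k+kw] * kernel)
    return out

# A toy 37-band 8x8 "multispectral image" and one 3D filter that
# responds to band-to-band (spectral) gradients at each pixel.
rng = np.random.default_rng(1)
image = rng.random((37, 8, 8))
spectral_edge = np.zeros((3, 3, 3))
spectral_edge[0] = -1.0  # previous band
spectral_edge[2] = 1.0   # next band

features = conv3d_valid(image, spectral_edge)
print(features.shape)  # (35, 6, 6): reduced along bands and space
```

Because the filter spans the band axis as well as the spatial axes, a single pass extracts spatial-spectral features, which is what lets such architectures stay small compared with stacking 2D filters per channel.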
Background The combination of computer vision devices such as multispectral cameras with artificial intelligence has provided a major leap forward in image-based analysis of biological processes. Supervised artificial intelligence algorithms require large ground truth image datasets for model training, which allows researchers to validate or refute research hypotheses and to carry out comparisons between models. However, public image datasets are scarce, and ground truth images are surprisingly few considering the numbers required for training algorithms. Results We created a dataset of 1,283 multidimensional arrays using berries from five different grape varieties. Each array holds 37 images at wavelengths between 488.38 and 952.76 nm obtained from single berries. Coupled to each multispectral image, we added measurements including weight, anthocyanin content, and Brix index for each independent grape. Thus, the images have paired measures, creating a ground truth dataset. We tested the dataset with two neural network algorithms: a multilayer perceptron (MLP) and a 3-dimensional convolutional neural network (3D-CNN). A perfect (100% accuracy) classification model was fit with either the MLP or 3D-CNN algorithms. Conclusions This is the first public dataset of grape ground truth multispectral images. Associated with each multispectral image are measures of weight, anthocyanins, and Brix index. The dataset should be useful for developing deep learning algorithms for classification, dimensionality reduction, regression, and prediction analysis.
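The pairing described above, a 37-band image cube plus per-berry measures, can be sketched as a simple record type. The field names, spatial resolution, and values below are assumptions for illustration, not the published schema of the dataset.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class BerrySample:
    """One ground-truth record: a multispectral cube plus paired measures."""
    variety: str        # e.g. "Crimson Seedless"
    cube: np.ndarray    # shape (37, H, W), one image per wavelength
    weight_g: float     # berry weight in grams
    anthocyanin: float  # anthocyanin content
    brix: float         # Brix index (soluble solids)

    def band(self, i: int) -> np.ndarray:
        """Return the image at the i-th wavelength."""
        return self.cube[i]

# Hypothetical sample with an assumed 64x64 spatial resolution
sample = BerrySample(
    variety="Autumn Royal",
    cube=np.zeros((37, 64, 64)),
    weight_g=5.2,
    anthocyanin=1.3,
    brix=17.8,
)
print(sample.band(10).shape)  # (64, 64)
```

Keeping the measures attached to the cube in one record is what makes the dataset usable for both classification (on `variety`) and regression (on `weight_g`, `anthocyanin`, or `brix`).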