SUMMARY
Increased phenotyping accuracy and throughput are necessary to improve our understanding of quantitative variation and to deconstruct complex traits such as those involved in growth responses to the environment. Still, only a few facilities are known to handle individual plants of small stature for non-destructive, real-time phenotype acquisition from plants grown in precisely adjusted and variable experimental conditions. Here, we describe Phenoscope, a high-throughput phenotyping platform that has the unique feature of continuously rotating 735 individual pots over a table. It automatically adjusts watering and is equipped with a zenithal imaging system to monitor rosette size and expansion rate during the vegetative stage, with automatic image analysis that allows manual correction. When applied to Arabidopsis thaliana, we show that rotating the pots strongly reduced micro-environmental disparity: heterogeneity in evaporation was cut by a factor of 2.5, and the number of replicates needed to detect a specific mild genotypic effect was reduced by a factor of 3. In addition, by controlling a large proportion of the micro-environmental variance, other tangible sources of variance become noticeable. Overall, Phenoscope makes it possible to perform large-scale experiments that would not be feasible or reproducible by hand. When applied to a typical quantitative trait loci (QTL) mapping experiment, we show that mapping power is more limited by genetic complexity than by phenotyping accuracy. This will help to draw a more general picture of how genetic diversity shapes phenotypic variation.
In recent years, content-based image retrieval has been the aim of many studies, and numerous systems have been introduced to achieve image indexing. One of the most common methods is to compute a segmentation and extract various parameters from the resulting regions. However, this segmentation step is based on low-level knowledge, without taking into account simple perceptual aspects of images, such as blur. When a photographer decides to focus only on some objects in a scene, he certainly treats these objects very differently from the rest of the scene: they do not carry the same amount of information. Blurry regions may generally be considered as context rather than as the information carrier by image retrieval tools. Our idea is therefore to focus the comparison between images by restricting our analysis to the non-blurry regions only, using this as metadata. Our aim is to introduce different features and a machine learning approach in order to achieve blur identification in scene images.
Much information can be extracted from geotagged photographs posted on online image databases such as Flickr or Panoramio. Recent works have demonstrated that processing this data can provide a good estimation of tourist behavior. Tourism has been, and remains, an important factor in the regional economy, and understanding and analyzing tourist behavior corresponds to a significant demand from institutions. For this purpose, many studies have been launched. Many tourism specialists need to separate tourists according to their place of residence. In the context of two projects supported by territorial authorities, this paper introduces a new approach to estimate a photographer's country of residence. Each user is described by his photographic timeline. This timeline allows us to compute intermediate properties: travel time at a destination, number of trips, number of visited countries, and so on. This generation of symbolic data is essential, as it synthesizes the richness of the timeline for the recognition task to be achieved. Classification algorithms are then introduced, some rule sets defined with experts in tourism science, others based on data clustering and supervised learning techniques. We compared these methods on two distinct questions: first, classifying photographers into two categories (e.g., French/non-French); second, finding the country of residence of each user. The results demonstrate that learning algorithms or expert-defined rules can identify users' residence efficiently. We are thus able to meet the request of tourism experts and further refine the analysis of tourist behavior.
This chapter summarizes the state-of-the-art color techniques used in the emerging field of image watermarking. It is now well understood that a color approach is required when dealing with security, steganography, and watermarking applications for multimedia content. Indeed, consumer and business expectations are focused on the protection of their content, which here consists of color images and videos. In the past few years, several gray-level image watermarking schemes have been proposed, but their application to color images is often inadequate since they usually work on the luminance or on individual color channels. Unfortunately, color cannot be reduced to a simple RGB decomposition, and all of its intrinsic information must be integrated into the watermarking process. The objective of this chapter is therefore to present, first, the major difficulties associated with the treatment of color images, and second, the state-of-the-art methods used in the field of color image watermarking.