Red-fleshed apples (Malus × domestica Borkh.) differ in flesh colour intensity among cultivars, seasons and sites. The objective of this study was to develop a procedure for predicting anthocyanin content from digital images of flesh discs. Flesh cylinders of uniform colour were excised, scanned and their colours determined in the RGB and L*a*b* colour spaces. Anthocyanin content was also quantified chemically. A calibration line was constructed to predict the anthocyanin content of flesh discs of varying colour from a scan or from a photograph taken in the studio or outdoors. Anthocyanin concentration was linearly related to the logarithms of G, B and L*. From these relationships, the anthocyanin content of a flesh disc was predicted pixel by pixel. Colour corrections were applied using a reference colour chart included in all images. The Finlayson algorithm was most effective for correcting the G parameter obtained with a flatbed scanner. Across imaging methods (scanning or photography), the Vandermonde algorithm applied to the L* parameter and the Finlayson algorithm applied to the G parameter were most effective in predicting anthocyanin content. The procedure allows accurate prediction of the anthocyanin content of red-fleshed apples from simple colour scans or photographs.
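To make the pixel-by-pixel step concrete, here is a minimal sketch of applying a linear calibration between anthocyanin content and log10(G) to every pixel of an image. The coefficients, function name and file name are hypothetical placeholders, not values from the paper; in practice the slope and intercept would come from regressing the chemically measured anthocyanin of the calibration discs on their colour-corrected G values, and the image is assumed to have been colour-corrected (e.g. Finlayson-style) against the reference chart beforehand.

```python
import numpy as np
from PIL import Image

# Hypothetical calibration coefficients (assumed, not from the paper):
# anthocyanin = INTERCEPT + SLOPE * log10(G), fitted on the calibration discs.
INTERCEPT = 5.0   # placeholder value
SLOPE = -2.0      # placeholder; G falls as anthocyanin rises, so the slope is negative

def predict_anthocyanin_map(image_path):
    """Predict anthocyanin pixel by pixel from the green (G) channel.

    Assumes the image has already been colour-corrected against the
    reference colour chart included in the scene.
    """
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=float)
    g = np.clip(rgb[..., 1], 1.0, 255.0)       # clip to avoid log10(0)
    return INTERCEPT + SLOPE * np.log10(g)     # linear in log10(G), per pixel

# Example (illustrative file name): mean prediction over a scanned flesh disc.
# antho = predict_anthocyanin_map("flesh_disc.png")
# print(antho.mean())
```

The same structure would apply to calibrations on log10(B) or log10(L*); only the channel extraction and the fitted coefficients change.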
The diversity of facial shapes and motions among persons is one of the greatest challenges for the automatic analysis of facial expressions. In this paper, we propose a feature that describes expression intensity over time while being invariant to the person and to the type of expression performed. Our feature is a weighted combination of the dynamics of multiple facial points, adapted to the overall expression trajectory. We evaluate our method on several tasks, all related to the temporal analysis of facial expressions. The proposed feature is compared to a state-of-the-art method for expression intensity estimation, which it outperforms. We use the feature to temporally align multiple sequences of recorded 3D facial expressions, and we show how it can reveal person-specific differences in the performance of facial expressions. We also apply it to identify local changes in face video sequences based on action unit labels. Across all experiments the feature proves robust to noise and outliers, making it suitable for a variety of applications in the analysis of facial movements.
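As a rough illustration of a weighted combination of point dynamics (not the authors' actual formulation, whose weights are adapted to the expression trajectory), the sketch below turns tracked 3D landmark trajectories into a single normalised intensity-over-time curve. The uniform default weighting and the cumulative-motion normalisation are assumptions made for the example.

```python
import numpy as np

def expression_intensity(landmarks, weights=None):
    """Toy intensity signal from facial landmark trajectories.

    landmarks: array of shape (T, N, 3) -- N tracked 3D points over T frames.
    weights:   per-point weights; uniform by default (an assumption -- the
               paper adapts them to the overall expression trajectory).
    Returns a length-T curve in [0, 1] combining per-point frame-to-frame motion.
    """
    T, N, _ = landmarks.shape
    if weights is None:
        weights = np.full(N, 1.0 / N)
    # Per-point displacement magnitude between consecutive frames: (T-1, N).
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)
    motion = disp @ weights                       # weighted combination per frame
    # Cumulative motion as a monotone proxy for expression progress over time.
    curve = np.concatenate([[0.0], np.cumsum(motion)])
    return curve / (curve[-1] or 1.0)             # normalise; guard all-zero input
```

Two such curves from different recordings could then be aligned in time with a standard method such as dynamic time warping.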