The growing number of microbiome-related studies has notably increased the availability of data on human microbiome composition and function. These studies provide the essential material for deeply exploring host-microbiome associations and their relation to the development and progression of various complex diseases. Improved data-analytical tools are needed to exploit all the information in these biological datasets, taking into account the peculiarities of microbiome data, i.e., their compositional, heterogeneous, and sparse nature. The ability to predict host phenotypes via taxonomy-informed feature selection, in order to establish microbiome-phenotype associations and predict disease states, is beneficial for personalized medicine. In this regard, machine learning (ML) provides new insights into the development of models that can be used to predict outputs such as classification and prediction in microbiology, to infer host phenotypes for disease prediction, and to use microbial communities to stratify patients through the characterization of state-specific microbial signatures. Here we review the state-of-the-art ML methods and respective software applied in human microbiome studies, performed as part of the COST Action ML4Microbiome activities. This scoping review focuses on the application of ML in microbiome studies related to association and clinical use for diagnostics, prognostics, and therapeutics. Although the data presented here relate mostly to the bacterial community, many of the algorithms could be applied in general, regardless of the feature type. This literature and software review covering this broad topic is aligned with the scoping review methodology.
The manual identification of data sources was complemented with: (1) an automated publication search through the digital libraries of three major publishers using a natural language processing (NLP) toolkit, and (2) an automated identification of relevant software repositories on GitHub, with ranking of the related research papers using a learning-to-rank approach.
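The review itself does not prescribe a single pipeline, but the compositional and sparse nature of microbiome data noted above is commonly handled with a log-ratio transform before a standard ML classifier is applied. A minimal sketch of this idea, using synthetic relative-abundance data and an assumed scikit-learn installation (the data, taxon count, and phenotype label are all hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical relative-abundance table: 100 samples x 20 taxa.
# Rows sum to 1 (compositional), with many near-zero entries (sparse).
counts = rng.gamma(shape=0.3, scale=1.0, size=(100, 20))
rel = counts / counts.sum(axis=1, keepdims=True)

# Centred log-ratio (CLR) transform: a small pseudocount avoids log(0),
# and subtracting each sample's mean log removes the unit-sum constraint.
pseudo = rel + 1e-6
clr = np.log(pseudo) - np.log(pseudo).mean(axis=1, keepdims=True)

# Toy phenotype driven by the abundance of the first taxon,
# standing in for a disease label in a real study.
y = (clr[:, 0] > np.median(clr[:, 0])).astype(int)

# Any standard classifier can then be trained on the CLR features.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(clr, y)
print(clf.score(clr, y))
```

In practice the transform, classifier, and validation scheme all vary across the reviewed studies; this sketch only illustrates why compositionality requires a preprocessing step before generic ML methods become appropriate.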
Visual perception is becoming increasingly important in computer graphics. Research on human visual perception has led to the development of perception-driven computer graphics techniques, where knowledge of the human visual system, and in particular its weaknesses, is exploited when rendering and displaying 3D graphics. It is well known that many sensory stimuli, including smell, may influence the amount of cognitive resources available to a viewer performing a visual task. In this paper we investigate the influence that smell has on the perception of object quality in a rendered image. We show how we can potentially accelerate the rendering of images by directing the viewer's attention towards the source of a smell and selectively rendering at high quality only the smell-emitting objects. Other parts of the image can be rendered at a lower quality without the viewer being aware of the quality difference. In this way, we can significantly reduce rendering time without any loss in the user's perception of the delivered quality.
A major obstacle for real-time rendering of high-fidelity graphics is computational complexity. A key point to consider in the pursuit of "realism in real time" in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye is only capable of sensing image detail in a 2° foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritized based on the saliency of the objects in the scene or the task the user is performing. Such "glimpses" of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail that the HVS deems unimportant may literally go unnoticed. Visual science research has identified that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics work has shown that both fixed-viewpoint and dynamic scenes can be selectively rendered without any perceptual loss of quality, in a significantly reduced time, by exploiting knowledge of any high-saliency movement that may be present. A high-saliency movement can be generated in a scene if an otherwise static object starts moving. In this article, we investigate, through psychophysical experiments, including eye-tracking, the perception of rendering quality in dynamic complex scenes based on the introduction of a moving object in a scene. Two types of object movement are investigated: (i) rotation in place and (ii) rotation combined with translation. These were chosen as the simplest movement types; future studies may include movement with varied acceleration. The object's geometry and location in the scene are not salient. We then use this information to guide our high-fidelity selective renderer to produce perceptually high-quality images at significantly reduced computation times.
We also show how these results can have important implications for virtual environment and computer game applications.
A major obstacle for real-time rendering of high-fidelity graphics is computational complexity. A key point to consider in the pursuit of "realism in real-time" in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye is only capable of sensing image detail in a 2° foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritised based on the saliency of the objects in the scene or the task the user is performing. Such "glimpses" of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail that the HVS deems unimportant may literally go unnoticed. Visual science research has identified that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics research has shown that both static and dynamic scenes can be selectively rendered without any perceptual loss of quality, in a significantly reduced time, by exploiting knowledge of any high-saliency movement that may be present. In this paper, we investigate, through detailed psychophysical experiments, including eye-tracking, the influence of movement in the background versus the influence of other saliency cues. We use the results to develop an algorithm for generating a saliency map that encompasses movement in the background. This algorithm is an integral part of the model that is used to reduce the rendering time of high-fidelity graphics by a factor of five.
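The abstracts above describe saliency-driven selective rendering only at a high level. The core budgeting idea, concentrating rendering effort where a saliency map is high and reducing it elsewhere, can be sketched as follows. This is an illustrative thresholding scheme with made-up sample budgets, not the authors' actual algorithm:

```python
import numpy as np

def selective_sample_counts(saliency, high=64, low=4, threshold=0.5):
    """Map per-pixel saliency in [0, 1] to a per-pixel sample budget:
    salient pixels get the full budget, the rest a small fraction of it."""
    return np.where(saliency >= threshold, high, low)

# Toy 8x8 saliency map with a highly salient "moving object" in the centre.
sal = np.zeros((8, 8))
sal[3:5, 3:5] = 1.0

samples = selective_sample_counts(sal)
full_cost = sal.size * 64         # uniform high-quality render
selective_cost = samples.sum()    # saliency-driven budget
print(full_cost / selective_cost) # approximate speed-up factor
```

Because the HVS resolves fine detail only in the small foveal region around attended points, most pixels in a frame can tolerate the low budget, which is where the reported multi-fold reductions in rendering time come from.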