Thermography with high-resolution cameras is being re-investigated as a possible breast cancer screening modality, as it does not carry the harmful radiation risks of mammography. This paper focuses on the automatic extraction of medically interpretable non-vascular thermal features. We design these features to differentiate malignancy from various non-malignant conditions, including hormone-sensitive tissues and certain benign conditions that have an increased thermal response. These features improve the specificity of breast cancer screening, a long-standing problem in thermographic screening, while retaining high sensitivity. They are also largely agnostic to camera model and resolution. On a dataset of 78 subjects with cancer and 187 subjects without cancer, some of whom have benign diseases and conditions with thermal responses, we achieve around 99% specificity with 100% sensitivity. This indicates a potential breakthrough in thermographic screening for breast cancer and shows promise for a comparison against mammography with larger numbers of subjects and more data variations.
With the recent interest in virtual reality and augmented reality, there is a newfound demand for displays that can provide high resolution with a wide field of view (FOV). However, such displays incur significantly higher costs for rendering the larger number of pixels. This poses the challenge of rendering realistic real-time images that have a wide FOV and high resolution using limited computing resources. The human visual system does not need every pixel to be rendered at a uniformly high quality. Foveated rendering methods provide perceptually high-quality images while reducing computational workload and are becoming a crucial component for large-scale rendering. In this paper, we present key motivations, research directions, and challenges for leveraging the limitations of the human visual system as they relate to foveated rendering. We provide a taxonomy to compare and contrast various foveated techniques based on key factors. We also review aliasing artifacts arising due to foveation methods and discuss several approaches that attempt to mitigate such effects. Finally, we present several open problems and possible future research directions that can further reduce computational costs while generating perceptually high-quality renderings.
Figure 1: We propose a new method for view interpolation through implicit neural representations (INR) of images. After each image is randomly assigned a code vector 𝑧, the codes are jointly trained with the neural network to produce an RGB color for a given coordinate (𝑥, 𝑦). With standard training, the INR fails to decode coherent images from new codes interpolated between two trained codes, but our method enables smooth transitions between two known viewpoints. Unlike common methods for view interpolation, our method does not use 3D structure, camera poses, or pixel correspondences during training.
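The caption's mechanism, a shared decoder that maps a pixel coordinate plus a per-image latent code to a color, with new views obtained by interpolating codes, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the network architecture, code dimension, and weight values are all assumptions, and the randomly initialized weights stand in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
CODE_DIM, HIDDEN = 8, 32  # illustrative sizes, not from the paper

# Randomly initialized weights stand in for a jointly trained network.
W1 = rng.normal(size=(2 + CODE_DIM, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 3))

def decode(xy, z):
    """Map a pixel coordinate (x, y) plus a latent code z to an RGB color."""
    h = np.tanh(np.concatenate([xy, z]) @ W1)      # hidden features
    return 1.0 / (1.0 + np.exp(-(h @ W2)))          # sigmoid -> RGB in [0, 1]

# Two per-image codes; interpolating between them yields intermediate views.
z_a = rng.normal(size=CODE_DIM)
z_b = rng.normal(size=CODE_DIM)
t = 0.5
z_mid = (1 - t) * z_a + t * z_b                     # linear code interpolation
rgb = decode(np.array([0.25, 0.75]), z_mid)         # color at one coordinate
```

Rendering a full intermediate view amounts to evaluating `decode` at every pixel coordinate with the interpolated code; the paper's contribution is making such interpolated codes decode to coherent images, which standard training does not guarantee.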