Angle perception is an important mid-level visual process that combines line features into an integrated shape percept. Previous studies have proposed two theories of angle perception: a combination of lines, or a holistic feature obeying Weber's law. However, both theories fail to explain the dual-peak fluctuations of the just-noticeable difference (JND) across angle sizes. In this study, we found that the human visual system processes the angle feature in two stages: first, by encoding the orientation of the bounding lines and combining them into an angle feature; and second, by estimating the angle in an orthogonal internal reference frame (IRF). The IRF model fits the dual-peak fluctuations of the JND that neither the theory of line combinations nor Weber's law can explain. A statistical analysis of natural images revealed that the IRF aligns with the distribution of angle features in the natural environment, suggesting that the IRF reflects human prior knowledge of angles in the real world. This study provides a new computational framework for angle discrimination, thereby resolving a long-standing debate on angle perception.
When moving around in the world, the human visual system uses both motion and form information to estimate the direction of self-motion (i.e., heading). However, little is known about the cortical areas responsible for this task. This brain-imaging study addressed the question using visual stimuli consisting of randomly distributed dot pairs oriented toward a locus on a screen (the form-defined focus of expansion [FoE]) but moving away from a different locus (the motion-defined FoE) to simulate observer translation. We first fixed the motion-defined FoE location and shifted the form-defined FoE location. We then made the locations of the motion- and the form-defined FoEs either congruent (at the same location in the display) or incongruent (on opposite sides of the display). The motion- or the form-defined FoE shift was the same in the two types of stimuli, but the perceived heading direction shifted for the congruent, but not for the incongruent, stimuli. Participants (both sexes) made a task-irrelevant (contrast discrimination) judgment during scanning. Searchlight and ROI-based multivoxel pattern analysis revealed that early visual areas V1, V2, and V3 responded to either the motion- or the form-defined FoE shift. After V3, only the dorsal areas V3A and V3B/KO responded to such shifts. Furthermore, area V3B/KO showed significantly higher decoding accuracy for the congruent than for the incongruent stimuli. Our results provide direct evidence that area V3B/KO does not simply respond to motion and form cues but integrates these two cues for the perception of heading.
In current clinical practice, the Gleason grading system is one of the most powerful prognostic predictors for prostate cancer (PCa). The grading system is based on the architectural pattern of cancerous epithelium in histological images. However, the standard procedure of histological examination often involves complicated tissue fixation and staining, which are time-consuming and may delay diagnosis and surgery. In this study, label-free multiphoton microscopy (MPM) was used to acquire subcellular-resolution images of unstained prostate tissues. A deep learning architecture (U-Net) was then introduced for epithelium segmentation of prostate tissues in MPM images. The resulting segmentation maps were merged with the original MPM images to train a classification network (AlexNet) for automated Gleason grading. The developed method achieved an overall pixel accuracy of 92.3% with a mean F1 score of 0.839 for epithelium segmentation. By merging the segmentation results with the MPM images, the accuracy of Gleason grading improved from 72.42% to 81.13% on the hold-out test set. Our results suggest that MPM combined with deep learning holds potential as a fast and powerful clinical tool for PCa diagnosis.
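The merging step described above can be sketched in a few lines: the binary epithelium mask produced by the segmenter is stacked onto the original MPM image as an extra input channel before classification. This is a minimal illustration, assuming a grayscale image and a same-sized binary mask; the function name and channel layout are not from the paper.

```python
import numpy as np

def merge_mask_with_image(mpm_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stack a binary segmentation mask onto a grayscale image as a second channel."""
    if mpm_image.shape != mask.shape:
        raise ValueError("image and mask must have the same spatial size")
    return np.stack([mpm_image, mask.astype(mpm_image.dtype)], axis=-1)

# Toy example: a 4x4 "image" and a mask marking an epithelium region.
image = np.random.rand(4, 4).astype(np.float32)
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0

merged = merge_mask_with_image(image, mask)
print(merged.shape)  # (4, 4, 2)
```

The merged array would then be fed to the classifier in place of the raw image, letting the network attend to the epithelium regions that drive the Gleason pattern.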
KEYWORDS: deep learning, epithelium segmentation, Gleason grading, multiphoton microscopy, prostate cancer
In this paper, based on the topology characteristics of VANETs, we propose a new routing protocol suited to intra-cluster VANET communication, built on AODV: AODV with predicting node trend (AODV-PNT). AODV-PNT introduces two major improvements: (1) an improved routing metric that computes the Total Weight of the Route (TWR), and (2) prediction of each node's future TWR together with a stability threshold W used to choose a suitable relay node. Finally, we simulated AODV-PNT in ns-2. The simulation results show that AODV-PNT achieves better routing performance than AODV in packet delivery ratio, average end-to-end delay, and routing overhead.
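The TWR metric can be sketched as a weighted sum of per-link stability factors accumulated along a route. The abstract does not give the exact formula, so the factors used here (relative speed, distance, and an estimated link lifetime, all normalized to [0, 1]) and their weights are illustrative assumptions only.

```python
def link_weight(rel_speed, distance, link_lifetime,
                w_speed=0.4, w_dist=0.3, w_life=0.3):
    """Weight of one link; lower means more stable (inputs normalized to [0, 1])."""
    return w_speed * rel_speed + w_dist * distance + w_life * (1.0 - link_lifetime)

def total_weight_of_route(links):
    """TWR-style metric: sum of per-link weights along the route; smaller is better."""
    return sum(link_weight(*link) for link in links)

# Route A: slow-moving, close, long-lived links. Route B: fast, far, short-lived.
route_a = [(0.2, 0.5, 0.9), (0.1, 0.4, 0.8)]
route_b = [(0.8, 0.9, 0.2), (0.7, 0.6, 0.3)]
print(total_weight_of_route(route_a) < total_weight_of_route(route_b))  # True
```

A node-trend predictor would then extrapolate each neighbor's factors forward in time, recompute the future TWR, and reject relays whose predicted weight crosses the stability threshold W.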
UAVs have been widely used in various applications. Automatic coordination of multiple UAVs through AI or mission-planning software can provide significant improvements in many applications, including battlefield reconnaissance, topographical mapping, and search and rescue missions. Under such circumstances, the trajectory information is known for a set amount of time, and the system's performance relies on the network between the UAVs and their base. Here, a new protocol is proposed that treats the trajectories of the UAVs as known and uses them to improve optimized link state routing (OLSR). In this protocol, Q-learning is adopted to find the best route for the system. Additionally, a packet-forwarding arrangement is described that addresses the common problem of deteriorating image quality faced by UAVs. The simulation results show significant improvements over OLSR and GPSR in a sparsely distributed scenario, with the packet delivery ratio improved by over 30% and the end-to-end delay reduced by over 40 s.
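The Q-learning component above can be illustrated with the standard tabular update, where each (state, action) pair is a (current node, next hop) choice. This is a minimal sketch under assumed details: the reward (negative per-hop delay), the node names, and the epsilon-greedy policy are illustrative, not the paper's exact design.

```python
import random

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def update_q(q, state, action, reward, next_state, actions):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose_next_hop(q, state, actions):
    """Epsilon-greedy choice of next hop among neighboring UAVs."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

# Toy episode: node "u1" learns that forwarding via "u2" (low delay, reward -0.1)
# beats forwarding via "u3" (high delay, reward -0.9).
q = {}
neighbors = ["u2", "u3"]
for _ in range(50):
    update_q(q, "u1", "u2", reward=-0.1, next_state="dst", actions=neighbors)
    update_q(q, "u1", "u3", reward=-0.9, next_state="dst", actions=neighbors)
print(q[("u1", "u2")] > q[("u1", "u3")])  # True
```

In a trajectory-aware variant, the known flight paths would feed the state (e.g., predicted neighbor positions), so the table can favor next hops that will remain reachable.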