Multiple object tracking (MOT) has proven to be a powerful technique for studying sustained selective attention. However, surprisingly little is known about its underlying neural mechanisms. Previous fMRI investigations have identified several brain areas thought to be involved in MOT, but the studies disagreed with one another, none distinguished between the act of tracking targets and the act of attending to targets, and none attempted to determine which of these brain areas interact with each other. Here we address these three issues. First, using more observers and a random effects analysis, we show that some of the previously identified areas may not play a specific role in MOT. Second, we show that the frontal eye fields (FEF), the anterior intraparietal sulcus (AIPS), the superior parietal lobule (SPL), the posterior intraparietal sulcus (PIPS) and the human motion area (MT+) are differentially activated by the act of tracking, as distinguished from the act of attention. Finally, by using an algorithm modified from the computer science literature, we were able to map the interactions between these brain areas.
Models of human decision-making aim to simultaneously explain the similarity, attraction, and compromise effects. However, evidence that people show all three effects within the same paradigm has come from studies in which choices were averaged over participants. This averaging is only justified if those participants show qualitatively similar choice behaviors. To investigate whether this was the case, we repeated two experiments previously run by Trueblood (Psychonomic Bulletin & Review, 19(5), 962-968, 2012) and Berkowitsch, Scheibehenne, and Rieskamp (Journal of Experimental Psychology: General, 143(3), 1331-1348, 2014). We found that individuals displayed qualitative differences in their choice behavior. In general, people did not simultaneously display all three context effects. Instead, we found a tendency for some people to show either the similarity effect or the compromise effect but not both. More importantly, many individuals showed strong dimensional biases that were much larger than any effects of context. This research highlights the dangers of averaging indiscriminately and the need to account for individual differences and dimensional biases in decision-making.
Introduction: To evaluate the accuracy of deep convolutional neural networks (DCNNs) for detecting neck of femur (NoF) fractures on radiographs, in comparison with perceptual training in medically-naïve individuals. Methods: This study extends a previous study that conducted perceptual training in medically-naïve individuals for the detection of NoF fractures across a variety of dataset sizes. The same anteroposterior hip radiograph dataset was used to train two DCNNs (AlexNet and GoogLeNet) to detect NoF fractures. For direct comparison with the perceptual training results, deep learning was completed across a variety of dataset sizes (200, 320 and 640 images), with images split into training (80%) and validation (20%) sets. An additional 160 images were used as the final test set. Multiple pre-processing and augmentation techniques were utilised. Results: The NoF fracture detection accuracy of the AlexNet and GoogLeNet DCNNs increased with larger training dataset sizes and mildly with augmentation. Accuracy increased from 81.9% and 88.1% to 89.4% and 94.4% for AlexNet and GoogLeNet respectively. Similarly, the test accuracy for perceptual training in top-performing medically-naïve individuals increased from 87.6% to 90.5% when trained on 640 images compared with 200 images. Conclusions: Single detection tasks in radiology are commonly used in DCNN research, with their results often used to make broader claims about machine learning being able to perform as well as subspecialty radiologists. This study suggests that as impressive as recognising fractures is for a DCNN, similar learning can be achieved by top-performing medically-naïve humans with less than 1 hour of perceptual training.
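The dataset-splitting scheme described in the Methods above (an 80%/20% training/validation split within each dataset size of 200, 320 or 640 images, plus a held-out 160-image test set) can be sketched as follows. This is a minimal illustration with synthetic image IDs; the function name and seed are assumptions, not part of the study's actual pipeline.

```python
import random

def split_dataset(image_ids, train_fraction=0.8, seed=0):
    """Shuffle image IDs and split them into training and validation folds."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    ids = list(image_ids)
    rng.shuffle(ids)
    n_train = int(len(ids) * train_fraction)
    return ids[:n_train], ids[n_train:]

# e.g. the 640-image condition: 512 training images, 128 validation images;
# a separate set of 160 image IDs would be reserved as the final test set.
train_ids, val_ids = split_dataset(range(640))
print(len(train_ids), len(val_ids))  # 512 128
```

The same helper applied to the 200- and 320-image conditions yields 160/40 and 256/64 splits, matching the 80/20 proportion reported above.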
Objectives To conduct a pilot study to evaluate the predictive value of the Montreal Cognitive Assessment test (MoCA) and a brief test of multiple object tracking (MOT) relative to other tests of cognition and attention in identifying at-risk older drivers, and to determine which combination of tests provided the best overall prediction. Methods Forty-seven currently licensed drivers (58 to 95 years old), primarily from a clinical driving evaluation program, participated. Their performance was measured on: (1) a screening test battery, comprising MoCA, MOT, Mini-Mental State Examination (MMSE), Trail-Making Test, visual acuity, contrast sensitivity, and Useful Field of View (UFOV); and (2) a standardized road test. Results Eighteen participants were rated at-risk on the road test. UFOV subtest 2 was the best single predictor, with an area under the curve (AUC) of .84. Neither MoCA nor MOT was a better predictor of the at-risk outcome than either MMSE or UFOV, respectively. The best four-test combination (MMSE, UFOV subtest 2, visual acuity and contrast sensitivity) was able to identify at-risk drivers with 95% specificity and 80% sensitivity (.91 AUC). Conclusions Although the best four-test combination was much better than any single test in identifying at-risk drivers, there is still much work to do in this field to establish test batteries that have both high sensitivity and specificity.
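The sensitivity and specificity figures quoted above are standard confusion-matrix quantities: sensitivity is the proportion of truly at-risk drivers flagged by the battery, and specificity is the proportion of not-at-risk drivers correctly cleared. A minimal sketch with synthetic labels (not the study's data) makes the computation concrete.

```python
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels, 1 = at-risk."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 1 = rated at-risk on the road test, 0 = not at-risk.
truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, pred)
print(sens, spec)  # 0.8 0.8
```

The AUC values reported above additionally summarise this trade-off across all possible decision thresholds, rather than at the single threshold shown here.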
Observers are poor at reporting the identities of objects that they have successfully tracked (Pylyshyn, Visual Cognition, 11, 801-822, 2004; Scholl & Pylyshyn, Cognitive Psychology, 38, 259-290, 1999). Consequently, it has been claimed that objects are tracked in a manner that does not encode their identities (Pylyshyn, 2004). Here, we present evidence that disputes this claim. In a series of experiments, we show that attempting to track the identities of objects can decrease an observer's ability to track the objects' locations. This indicates that the mechanisms that track, respectively, the locations and identities of objects draw upon a common resource. Furthermore, we show that this common resource can be voluntarily distributed between the two mechanisms. This is clear evidence that the location- and identity-tracking mechanisms are not entirely dissociable.
Diagnosing certain fractures in conventional radiographs can be a difficult task, usually taking years to master. Typically, students are trained ad hoc, in a primarily rule-based fashion. Our study investigated whether students can more rapidly learn to diagnose proximal neck of femur fractures via perceptual training, without having to learn an explicit set of rules. One hundred and thirty-nine students with no prior medical or radiology training were shown a sequence of plain film X-ray images of the right hip and for each image were asked to indicate whether a fracture was present. Students were told whether they were correct and shown the location of any fracture, if present. No other feedback was given. The more able students achieved the same level of accuracy as board certified radiologists at identifying hip fractures in less than an hour of training. Surprisingly, perceptual learning was reduced when the training set was constructed to over-represent the types of images participants found more difficult to categorise. Conversely, repeating training images did not reduce post-training performance relative to showing an equivalent number of unique images. Perceptual training is an effective way of helping novices learn to identify hip fractures in X-ray images and should supplement the current education programme for students.
There is much debate regarding the types of information observers use to track moving objects. Howe and Holcombe (Journal of Vision 12(13): 1-10, 2012) recently reported evidence that observers employ extrapolation while tracking. However, their study is potentially confounded because it did not control for eye movements. As eye movements can aid extrapolation, it is unclear whether extrapolation can still occur in multiple object tracking (MOT) when eye movements are eliminated. In the current study, we addressed this question using an eye tracker to ensure that fixation was always maintained on a central fixation point while observers performed a tracking task. In the predictable condition, objects always travelled along linear paths. In the unpredictable condition, objects randomly changed direction every 300-600 ms. If observers employ extrapolation, we would expect performance to be greater in the former condition than in the latter condition. Our results showed that observers did indeed perform better in the predictable condition than in the unpredictable condition, at least when tracking just two objects (Experiments 1, 3, and 4). Extrapolation occurred less when tracking loads increased or when the objects moved more slowly (Experiment 2).
Is it easier to track objects that you have seen repeatedly? We compared repeated blocks, where identities were the same from trial to trial, to unrepeated blocks, where identities varied. People were better in tracking objects that they saw repeatedly. We tested four hypotheses to explain this repetition benefit. First, perhaps the repeated condition benefits from consistent mapping of identities to target and distractor roles. However, the repetition benefit persisted even when both the repeated and the unrepeated conditions used consistent mapping. Second, repetition might improve the ability to recover targets that have been lost, or swapped with distractors. However, we observed a larger repetition benefit for color-color conjunctions, which do not benefit from such error recovery processes, than for unique features, which do. Furthermore, a repetition benefit was observed even in the absence of distractors. Third, perhaps repetition frees up resources by reducing memory load. However, increasing memory load by masking identities during the motion phase reduced the repetition benefit. The fourth hypothesis is that repetition facilitates identity tracking, which in turn improves location tracking. This hypothesis is consistent with all our results. Thus, our data suggest that identity and location tracking share a common resource.