Our understanding of how experts integrate prior situation-specific information (i.e., contextual priors) with emergent visual information when performing dynamic and temporally constrained tasks is limited. We use a soccer-based anticipation task to examine the ability of expert and novice players to integrate prior information about an opponent's action tendencies with unfolding environmental information such as opponent kinematics. We recorded gaze behaviours and ongoing expectations during task performance. Moreover, we assessed their final anticipatory judgements and perceived levels of cognitive effort invested. Explicit contextual priors biased the allocation of visual attention and shaped ongoing expectations in experts, but not in novices. When the final action was congruent with the most likely action given the opponent's action tendencies, the contextual priors enhanced the final judgements for both groups. For incongruent trials, the explicit priors had a negative impact on the final judgements of novices, but not experts. We interpret the data using a Bayesian framework to provide novel insights into how contextual priors and dynamic environmental information are combined when making decisions under time pressure. Moreover, we provide evidence that this integration is governed by the temporal relevance of the information at hand as well as the ability to infer this relevance.
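The Bayesian framework invoked here can be illustrated with a minimal numerical sketch (the probabilities below are hypothetical, not values from the study): an explicit contextual prior over an opponent's possible actions is combined with a likelihood derived from unfolding kinematic evidence, and the normalised product gives the updated expectation.

```python
# Minimal sketch of Bayesian cue integration (hypothetical numbers):
# a contextual prior over two possible opponent actions is combined
# with a likelihood derived from emerging kinematic information.

def posterior(prior, likelihood):
    """Normalised elementwise product of prior and likelihood."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Prior: the opponent's action tendency favours "left" (80%).
prior = [0.8, 0.2]          # P(left), P(right)
# Kinematics on an incongruent trial weakly favour "right".
likelihood = [0.3, 0.7]     # P(kinematics | left), P(kinematics | right)

print(posterior(prior, likelihood))
```

In this toy incongruent trial the posterior still favours "left" (about 0.63 vs 0.37), illustrating how a strong explicit prior can dominate weak contradictory evidence, consistent with the cost the study observes on incongruent trials.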
There is a growing demand for the uptake of modern artificial intelligence technologies within healthcare systems. Many of these technologies exploit historical patient health data to build powerful predictive models that can be used to improve diagnosis and understanding of disease. However, there are many issues concerning patient privacy that need to be accounted for in order to enable this data to be better harnessed by all sectors. One approach that could offer a method of circumventing privacy issues is the creation of realistic synthetic data sets that capture as many of the complexities of the original data set (distributions, non-linear relationships, and noise) as possible, but that do not actually include any real patient data. While previous research has explored models for generating synthetic data sets, here we explore the integration of resampling, probabilistic graphical modelling, latent variable identification, and outlier analysis for producing realistic synthetic data based on UK primary care patient data. In particular, we focus on handling missingness, complex interactions between variables, and the resulting sensitivity analysis statistics from machine learning classifiers, while quantifying the risks of patient re-identification from synthetic datapoints. We show that, through our approach of integrating outlier analysis with graphical modelling and resampling, we can achieve synthetic data sets that are not significantly different from original ground truth data in terms of feature distributions, feature dependencies, and sensitivity analysis statistics when inferring machine learning classifiers. What is more, the risk of generating synthetic data that is identical or very similar to real patients is shown to be low.
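The general idea of pairing synthetic-data generation with a re-identification check can be sketched as follows. This is not the paper's pipeline (which integrates graphical modelling, latent variables, and outlier analysis); it is a deliberately simple stand-in that resamples real rows with jitter and then flags any synthetic point that falls suspiciously close to a real record.

```python
import math
import random

# Crude illustration (not the paper's method): synthesise rows by
# resampling real rows with Gaussian jitter, then measure each
# synthetic row's distance to its nearest real record as a simple
# re-identification risk proxy.

def synthesise(real_rows, n, jitter=0.1, seed=0):
    rng = random.Random(seed)
    return [[v + rng.gauss(0, jitter) for v in rng.choice(real_rows)]
            for _ in range(n)]

def min_distance_to_real(row, real_rows):
    return min(math.dist(row, r) for r in real_rows)

real = [[1.0, 2.0], [2.0, 1.5], [3.0, 3.5]]       # toy "patient" records
synthetic = synthesise(real, 5)

# Flag synthetic rows nearly identical to a real record.
risky = [s for s in synthetic if min_distance_to_real(s, real) < 0.01]
```

A real deployment would replace the jittered resampler with a generative model fitted to the joint distribution, but the nearest-record distance check is a common first-pass privacy diagnostic.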
In a detailed review of cystic hepatobiliary neoplasms, we identified a subset of 50 cases in which tumors were characterized by the presence of a mesenchymal cell layer interposed between an inner epithelial lining and an outer connective tissue layer. We have recently seen three such patients, making a total of 53 patients reported in the English literature. All of the patients were female, 44 of whom, with an average age of 41 years, had benign tumors. The average age of the remaining nine patients was 57 years and these patients had malignant tumors. In seven patients, the malignancy arose from the epithelial layer, but in two patients sarcomatous changes were identified in the mesenchymal tissues. The older age of the patients with malignant tumors, together with adequate serial biopsies in two cases, supported the thesis that malignant transformation may occur in the benign tumors. Moreover, the location of the tumor in one of our patients, in whom the resected tumor was associated with anomalous right hepatic ducts and portal veins, supported the theory that these tumors develop embryologically from nests of primitive hepatobiliary endodermal and mesodermal cells. Although surgical treatment was performed in all patients, 25% of the patients with benign hepatobiliary cystadenoma with mesenchymal stroma (CMS) and 33% of the patients with malignant CMS had tumor recurrence after primary resection. Ninety per cent of these patients had an incomplete resection at the time of their initial operations. Forty-four per cent of the patients with malignant CMS died after a mean follow-up of 17 months. We conclude that CMS (Edmonson's tumor) occurs uniquely in young female patients, develops from nests of primitive embryonal cells, has the potential for malignant transformation, and should be completely resected at primary operation to avoid recurrence.
Learning the structure of Bayesian networks from data is known to be a computationally challenging, NP-hard problem. The literature has long investigated how to perform structure learning from data containing large numbers of variables, following a general interest in high-dimensional applications ("small n, large p") in systems biology and genetics. More recently, data sets with large numbers of observations (the so-called "big data") have become increasingly common; and these data sets are not necessarily high-dimensional, sometimes having only a few tens of variables depending on the application. We revisit the computational complexity of Bayesian network structure learning in this setting, showing that the common choice of measuring it with the number of estimated local distributions leads to unrealistic time complexity estimates for the most common class of score-based algorithms, greedy search. We then derive more accurate expressions under common distributional assumptions. These expressions suggest that the speed of Bayesian network learning can be improved by taking advantage of the availability of closed form estimators for local distributions with few parents. Furthermore, we find that using predictive instead of in-sample goodness-of-fit scores improves speed; and we confirm that it improves the accuracy of network reconstruction.