The human visual system is able to extract an object from its surroundings using a number of cues, including foreground/background gradients in disparity, motion, texture, colour, and luminance. We investigated normal subjects' ability to detect objects defined by motion, texture, or luminance gradients, and examined the effects of manipulating cue density and cue foreground/background gradient on both detection and recognition accuracy. The results demonstrate a simple additive relationship between cue density and cue gradient across forms defined by motion, luminance, and texture. We interpret these results as evidence that form parsing is achieved via a similar algorithm across anatomically distinct processing streams.
Comerota et al (1) reported significantly higher blood flow velocities in women than in men across stenoses of 55% to 75% of the internal carotid artery (ICA). Given the law of conservation of mass, it is difficult to understand how a given percentage of luminal narrowing could produce a different increase in velocity across genders. Comerota et al used a small sample size (<40 in each group), neglected to use an appropriate correction for multiple comparisons, such as the Bonferroni correction, and did not report statistics for age (a potential confound) in the groups where the significant differences were found.

The John Hunter Hospital Cardiovascular Unit database was queried for all carotid duplex studies since the inception of the database in 1991. The analysis included 6,165 studies comprising 3,287 men and 2,878 women. Student's t test for independent samples was used to determine whether there was an effect of gender on carotid artery blood flow velocities in both the ICA and the common carotid artery (CCA) on the left and right sides. Because of the number of comparisons being made, a Bonferroni correction was used, and a result of P < .004 was considered significant. With this sample size, assuming a statistical power of 0.8, a mean difference of 0.06 m/s in the ICA would be detectable at the level of P < .004. Comerota et al found a mean difference of 0.49 m/s.

In our dataset, women had significantly lower CCA velocities than men (left side: t = 5.327, df = 6163, P < .0001; right side: t = 4.646, df = 6179, P < .0001). Furthermore, women were an average of 2 years older than their male counterparts (t = -6.583, df = 6228, P < .0001). Age was significantly correlated with velocities of the CCA (r = -.295, P < .0001) and the ICA (r = -.048, P < .0001). This confound may account for the 0.03-m/s difference found in both the right and left CCA velocities between men and women. Comerota et al found a gender difference at 50% and 60% stenosis.
The arteries where a 50% to 79% stenosis was found on duplex were extracted from the group as a whole and compared across genders for both the right and left sides. This gave a sample size of 202 men and 174 women with right-sided stenoses and 382 men and 336 women with left-sided stenoses. No significant differences in velocities were found for either the left or the right common or internal carotid arteries in this group.

A χ² analysis, performed using gender and category of stenosis, found that men are significantly more likely than women to have moderate and severe disease (right side: χ² = 12.134, P < .001; left side: χ² = 13.235, P < .001).

In conclusion, there is no evidence of a gender difference in blood flow velocities across stenoses in our relatively large dataset. The findings of Comerota et al may be confounded by an effect of age differences.
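The multiple-comparisons logic used above can be sketched in a few lines. This is an illustrative sketch with made-up velocity data, not the letter's actual analysis; `welch_t` and `bonferroni_alpha` are hypothetical helper names, and the comparison counts shown are assumptions (the letter does not state exactly how many tests produced its P < .004 threshold).

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

def bonferroni_alpha(alpha, n_comparisons):
    """Per-test significance threshold after a Bonferroni correction."""
    return alpha / n_comparisons

# With 4 velocity comparisons (left/right x CCA/ICA) at a family-wise
# alpha of 0.05, each test must reach p < 0.0125; with 12 comparisons
# the per-test threshold drops to roughly p < .004.
print(bonferroni_alpha(0.05, 4))             # 0.0125
print(round(bonferroni_alpha(0.05, 12), 4))  # 0.0042

# Toy velocity samples (m/s) to show the direction of the statistic:
men = [1.10, 1.05, 1.20, 1.15]
women = [1.00, 0.95, 1.05, 1.02]
print(welch_t(men, women) > 0)  # True: higher mean in the first group
```

The point of the correction is that each additional comparison inflates the family-wise false-positive rate, so the per-test threshold must shrink in proportion.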
Classifying free text from historical databases into research-compatible formats is a barrier for clinicians undertaking audit and research projects. The aims of this study were to (a) develop an interactive active machine-learning model-training methodology using readily available software that was (b) easily adaptable to a wide range of natural-language databases and allowed customised, researcher-defined categories, and then (c) evaluate the accuracy and speed of this model for classifying free text from two unique and unrelated sets of clinical notes into coded data. A user interface was created for medical experts to train and evaluate the algorithm. The data requiring coding took the form of two independent databases of free-text clinical notes, each with a unique natural-language structure. Medical experts defined categories relevant to research projects and performed 'label-train-evaluate' loops on the training dataset. A separate dataset was used for validation, with the medical experts blinded to the label given by the algorithm. The first dataset comprised 32,034 death certificate records from Northern Territory Births, Deaths and Marriages, which were coded into three categories: haemorrhagic stroke, ischaemic stroke, or no stroke. The second dataset comprised 12,039 recorded episodes of aeromedical retrieval from two prehospital and retrieval services in the Northern Territory, Australia, which were coded into five categories: medical, surgical, trauma, obstetric, or psychiatric. For the first dataset, the macro-accuracy of the algorithm was 94.7%; for the second dataset, macro-accuracy was 92.4%. The time taken to develop and train the algorithm was 124 minutes for the death certificate coding and 144 minutes for the aeromedical retrieval coding. This machine-learning training method was able to classify free-text clinical notes quickly and accurately from two different health datasets into categories of relevance to clinicians undertaking health service research.
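The 'label-train-evaluate' loop described above can be sketched roughly as follows. Everything here is an illustrative stand-in under stated assumptions: `KeywordClassifier` is a toy bag-of-words model, `expert_label` plays the role of the human labeller, and none of these names correspond to the study's actual software, which the abstract does not specify.

```python
from collections import Counter

class KeywordClassifier:
    """Toy bag-of-words classifier: predicts the category whose
    labelled examples share the most tokens with the input note."""
    def __init__(self):
        self.token_counts = {}  # category -> Counter of tokens

    def train(self, labelled):
        """Incorporate (text, label) pairs supplied by the expert."""
        for text, label in labelled:
            self.token_counts.setdefault(label, Counter()).update(
                text.lower().split())

    def predict(self, text):
        tokens = text.lower().split()
        scores = {cat: sum(cnt[t] for t in tokens)
                  for cat, cnt in self.token_counts.items()}
        return max(scores, key=scores.get)

def label_train_evaluate(model, pool, expert_label, rounds=1, batch=2):
    """One simplified cycle: the expert labels a small batch, the model
    retrains on it, and accuracy is then measured against the expert's
    labels on the notes still in the unlabelled pool."""
    pool = list(pool)
    for _ in range(rounds):
        batch_items, pool = pool[:batch], pool[batch:]
        model.train([(t, expert_label(t)) for t in batch_items])
    if not pool:
        return 1.0
    return sum(model.predict(t) == expert_label(t) for t in pool) / len(pool)

# Tiny synthetic pool of retrieval notes with a rule-based 'expert':
notes = ["chest pain medical review", "fall from height trauma",
         "medical fever and cough", "trauma open wound"]
expert = lambda t: "trauma" if "trauma" in t else "medical"
print(label_train_evaluate(KeywordClassifier(), notes, expert))  # 1.0
```

In the study the loop was repeated until accuracy on the evaluation step was acceptable; the sketch compresses that into a fixed number of rounds for brevity.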