Abstract

Objective: The rate of learning is often cited as a deterrent to the use of endoscopic ear surgery. This study investigated the learning curves of novice surgeons performing simulated ear surgery using either an endoscope or a microscope.

Methods: A prospective multi-site clinical research study was conducted. Seventy-two medical students were randomly allocated to the endoscope or microscope group, and each performed 10 myringotomy and ventilation tube insertions. Trial times were used to produce learning curves, from which the slope (learning rate) and asymptote (optimal proficiency) were ascertained.

Results: There was no significant difference between the learning curves (p = 0.41). The learning rate value was 68.62 for the microscope group and 78.71 for the endoscope group. The optimal proficiency (seconds) was 32.83 for the microscope group and 27.87 for the endoscope group.

Conclusion: The absence of a significant difference indicates that the learning rates of the two techniques are statistically indistinguishable. This suggests that surgeons are not justified in citing a 'steep learning curve' as an argument against the use of endoscopes in middle-ear surgery.
In this paper, we develop a sparse method for unsupervised dimension reduction for data from an exponential-family distribution. Our idea extends previous work on Generalised Principal Component Analysis by adding L1 and SCAD penalties to introduce sparsity. We demonstrate the significance and advantages of our method with synthetic and real data examples. We focus on the application to text data, which is high-dimensional and non-Gaussian by nature, and discuss the potential advantages of our methodology in achieving dimension reduction.
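The L1 and SCAD penalties mentioned above are typically applied through their thresholding operators. As a minimal illustrative sketch (not the paper's implementation), the lasso operator shrinks every coefficient by the same amount, while the SCAD rule of Fan and Li (2001, with the commonly used a = 3.7) behaves like the lasso near zero but leaves large coefficients unshrunk, reducing bias:

```python
import numpy as np

def soft_threshold(z, lam):
    """L1 (lasso) thresholding: shrink each value towards zero by lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def scad_threshold(z, lam, a=3.7):
    """SCAD thresholding rule (Fan & Li, 2001).

    Lasso-like shrinkage for small values, a linear blend in the
    middle region, and no shrinkage for large values.
    """
    z = np.asarray(z, dtype=float)
    absz = np.abs(z)
    return np.where(
        absz <= 2 * lam,
        soft_threshold(z, lam),                              # lasso region
        np.where(
            absz <= a * lam,
            ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),  # blend region
            z,                                               # no shrinkage
        ),
    )
```

For example, with lam = 1 both rules set a coefficient of 0.5 exactly to zero, but SCAD returns a coefficient of 10 unchanged while the lasso still shrinks it to 9; this bias reduction for large loadings is a key reason to consider SCAD alongside L1.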
Dimension reduction tools offer a popular approach to the analysis of high-dimensional big data. In this paper, we propose an algorithm for sparse Principal Component Analysis for non-Gaussian data. Since our interest in the algorithm stems from applications in text data analysis, we focus on the Poisson distribution, which has been used extensively in analysing text data. In addition to sparsity, our algorithm is able to effectively determine the desired number of principal components in the model (order determination). The good performance of our proposal is demonstrated with both synthetic and real data examples.
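To make the idea of sparse PCA for Poisson data concrete, here is a minimal numpy sketch under simplified assumptions (it is not the paper's algorithm and omits order determination): counts are modelled as X_ij ~ Poisson(exp([A Bᵀ]_ij)), and we alternate gradient steps on the scores A and loadings B, with an L1 proximal (soft-thresholding) step on B that produces exact zeros:

```python
import numpy as np

def sparse_poisson_pca(X, k, lam=2.0, step=0.02, iters=300, seed=0):
    """Illustrative sparse exponential-family PCA for Poisson counts.

    Alternates gradient steps on scores A (n x k) and loadings
    B (p x k) for the negative Poisson log-likelihood, then applies
    soft-thresholding to B to induce sparsity in the loadings.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    A = 0.1 * rng.standard_normal((n, k))
    B = 0.1 * rng.standard_normal((p, k))
    for _ in range(iters):
        Theta = np.clip(A @ B.T, -10, 10)   # natural parameter, clipped for stability
        R = np.exp(Theta) - X               # gradient of -loglik w.r.t. Theta
        A -= step * (R @ B) / p
        Theta = np.clip(A @ B.T, -10, 10)
        R = np.exp(Theta) - X
        B -= step * (R.T @ A) / n
        # proximal L1 step: drives weak loadings exactly to zero
        B = np.sign(B) * np.maximum(np.abs(B) - step * lam, 0.0)
    return A, B
```

On count data where only a few features carry signal, the loadings of the remaining features are shrunk to exact zeros, which is the interpretability gain of the sparse formulation over plain exponential-family PCA.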
Li, Artemiou and Li (2011) presented the novel idea of using Support Vector Machines to perform sufficient dimension reduction. In this work, we investigate the potential improvement in recovering the dimension reduction subspace when one modifies the Support Vector Machines algorithm to treat imbalance, based on several proposals in the machine learning literature. We find that in most situations, treating the imbalanced nature of the slices improves the estimation. Our results are verified through simulations and real data applications.
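One standard imbalance adjustment from the machine learning literature is to reweight the hinge loss by class, so that the minority slice contributes as much to the fit as the majority slice. The sketch below is a minimal numpy illustration of that idea via subgradient descent on a weighted linear SVM; it is not the estimator studied in the work above, and the weights C_pos / C_neg are illustrative parameters:

```python
import numpy as np

def weighted_linear_svm(X, y, C_pos=1.0, C_neg=1.0, lr=0.01, epochs=200):
    """Linear SVM fitted by subgradient descent on the class-weighted
    hinge loss  0.5 * ||w||^2 + sum_i c_i * max(0, 1 - y_i (w.x_i + b)),
    where c_i is C_pos for y_i = +1 and C_neg for y_i = -1.

    Upweighting the minority class is one common way to treat
    imbalance between slices.
    """
    n, p = X.shape
    w = np.zeros(p)
    b = 0.0
    weights = np.where(y > 0, C_pos, C_neg)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = (margins < 1).astype(float) * weights  # points violating the margin
        grad_w = w - (active * y) @ X                   # subgradient w.r.t. w
        grad_b = -np.sum(active * y)                    # subgradient w.r.t. b
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b
```

In an SVM-based sufficient dimension reduction setting, the quantity of interest is the normal vector w of the separating hyperplane between slices: on imbalanced slices, upweighting the smaller slice tends to keep w aligned with the true reduction direction rather than being dominated by the larger slice.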
This is a brief overview of the methodology around exponential family PCA. We revisit classic PCA methodology, and we focus on exponential family PCA due to its applicability to a number of distributions and hence a wide variety of problems. We discuss the applicability of these methods to text data analysis, given the high-dimensional and sparse nature of such data.