Objective: To better understand public perception and comprehension of medical technology such as artificial intelligence (AI) and robotic surgery, and to identify sensitivity to their use in order to ensure acceptability and quality of counseling. Subjects and Methods: A survey was conducted on a convenience sample of visitors to the Minnesota State Fair (n = 264). Participants were randomized to receive one of two similar surveys. In the first, a diagnosis was made by a physician and in the second by an AI application, to compare confidence in human and computer-based diagnosis. Results: The median age of participants was 45 (interquartile range 28-59); 58% were female (n = 154) vs 42% male (n = 110); 69% had completed at least a bachelor's degree; 88% were Caucasian (n = 233) vs 12% ethnic minorities (n = 31); and participants came from 12 states, mostly in the Upper Midwest. Participants had nearly equal trust in AI and physician diagnoses. However, they were significantly more likely to trust an AI diagnosis of cancer over a doctor's diagnosis when responding to the version of the survey that suggested that an AI could make medical diagnoses (p = 9.32e-06). Though 55% of respondents (n = 145) reported that they were uncomfortable with automated robotic surgery, the majority of the individuals surveyed (88%) mistakenly believed that partially autonomous surgery was already happening. Almost all (94%, n = 249) stated that they would be willing to pay for a review of medical imaging by an AI if available. Conclusion: Most participants expressed confidence in AI providing medical diagnoses, sometimes even over human physicians. Participants generally expressed concern about surgical AI, yet mistakenly believed that it is already being performed. As AI applications increase in medical practice, health care providers should be cognizant of the misinformation patients may hold and their sensitivity to how such technology is represented.
Background: Growing rates of kidney tumor incidence have led to research into the use of artificial intelligence (AI) to radiographically differentiate and objectively characterize these tumors. Automated segmentation using AI objectively quantifies the complexity and aggressiveness of renal tumors to better differentiate and describe them for improved treatment decision making. The 2019 Kidney and Kidney Tumor Segmentation challenge (KiTS19), an international competition held in conjunction with the 2019 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), sought to stimulate progress on this automatic segmentation frontier. Methods: A training set of over 31,000 CT images from 210 patients with kidney tumors was publicly released with corresponding semantic segmentation masks. 106 teams from five continents used these data to develop automated deep learning systems to predict the true segmentation masks on a test set of an additional 13,500 CT images from 90 patients, for which the corresponding ground truth segmentations were kept private. Predictions were scored and ranked according to their average Sørensen-Dice coefficient for kidney and tumor across the 90 test cases. Results: The winning team achieved a Dice of 0.974 for kidney and 0.851 for tumor, approaching human inter-annotator performance on kidney (0.983) but falling short on tumor (0.923). The challenge has since entered an “open leaderboard” phase in which it serves as a challenging benchmark in 3D semantic segmentation. Conclusions: The results of the KiTS19 challenge show that deep learning methods are capable of reliable segmentation of kidneys and kidney tumors. The challenge attracted a high number of submissions, and the publicly available data will further propel the use of automated 3D segmentation analysis.
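As a concrete illustration of the challenge's scoring metric, the Sørensen-Dice coefficient can be sketched in pure Python on flattened binary masks. In practice the KiTS19 masks are 3D voxel arrays and the values below are illustrative only; this is a minimal sketch, not the challenge's evaluation code:

```python
def dice(pred, truth):
    """Sørensen-Dice coefficient between two binary masks (flat 0/1 lists).

    Dice = 2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 1-D example: 3 of the foreground voxels overlap,
# each mask has 4 foreground voxels -> 2*3 / (4+4) = 0.75
pred = [1, 1, 1, 0, 1, 0]
truth = [1, 1, 1, 1, 0, 0]
print(dice(pred, truth))  # 0.75
```

In the challenge, this per-case score was averaged across the 90 private test cases for the kidney and tumor classes separately.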
Fully segmented kidneys and tumors allow for automated calculation of all types of nephrometry and tumor textural variation, and for discovery of new predictive features important for personalized medicine and accurate prediction of patient-relevant outcomes.
There is a large body of literature linking anatomic and geometric characteristics of kidney tumors to perioperative and oncologic outcomes. Semantic segmentation of these tumors and their host kidneys is a promising tool for quantitatively characterizing these lesions, but its adoption is limited by the manual effort required to produce high-quality 3D segmentations of these structures. Recently, methods based on deep learning have shown excellent results in automatic 3D segmentation, but they require large datasets for training, and there remains little consensus on which methods perform best. The 2019 Kidney and Kidney Tumor Segmentation challenge (KiTS19) was a competition held in conjunction with the 2019 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) which sought to address these issues and stimulate progress on this automatic segmentation problem. A training set of 210 cross-sectional CT images with kidney tumors was publicly released with corresponding semantic segmentation masks. 106 teams from five continents used these data to develop automated systems to predict the true segmentation masks on a test set of 90 CT images for which the corresponding ground truth segmentations were kept private. These predictions were scored and ranked according to their average Sørensen-Dice coefficient for kidney and tumor across all 90 cases. The winning team achieved a Dice of 0.974 for kidney and 0.851 for tumor, approaching the inter-annotator performance on kidney (0.983) but falling short on tumor (0.923). This challenge has now entered an "open leaderboard" phase where it serves as a challenging benchmark in 3D semantic segmentation.
Purpose: Clinicians rely on imaging features to calculate the complexity of renal masses based on validated scoring systems. These scoring methods are labor-intensive and subject to interobserver variability. Artificial intelligence (AI) has been increasingly utilized by the medical community to solve such issues. However, developing reliable algorithms is usually time-consuming and costly. We created an international community-driven competition (KiTS19) to develop and identify the best system for automatic segmentation of kidneys and kidney tumors in contrast-enhanced CT, and report the results. Methods: Training and test sets of CT scans, manually annotated by trained individuals, were generated from consecutive patients undergoing renal surgery for whom demographic, clinical, and outcome data were available. The KiTS19 Challenge was a machine learning competition hosted on grand-challenge.org in conjunction with an international conference. Teams were given 3 months to develop their algorithms using a fully annotated training set of images; an unannotated test set was then released for 2 weeks, from which average Sørensen-Dice coefficients for the kidney and tumor regions were calculated across all 90 test cases. Results: There were 100 valid submissions, all based on deep neural networks, though they differed in pre-processing strategies, architectural details, and training procedures. The winning team scored a 0.974 kidney Dice and a 0.851 tumor Dice, for a composite score of 0.912. Automatic segmentation of the kidney by the participating teams performed comparably to expert manual segmentation but was less reliable when segmenting the tumor. Conclusion: Rapid advancement in automated semantic segmentation of kidney lesions is possible with relatively high accuracy when data are released publicly and participation is incentivized. We hope that our findings will encourage further research that would realize the potential of adopting AI in the medical field.
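Assuming the composite score is the simple mean of the per-class average Dice values (the abstract does not spell out the exact aggregation, but this is consistent with the reported 0.912 for the winning team), the ranking computation can be sketched as:

```python
def composite_score(per_case):
    """per_case: list of (kidney_dice, tumor_dice) tuples, one per test case.

    Each class's Dice is averaged across cases, then the two class
    means are averaged into a single composite used for ranking.
    """
    n = len(per_case)
    kidney = sum(k for k, _ in per_case) / n
    tumor = sum(t for _, t in per_case) / n
    return (kidney + tumor) / 2

# Sanity check against the winning team's reported class means:
# (0.974 + 0.851) / 2 = 0.9125, reported as 0.912.
print(composite_score([(0.974, 0.851)]))
```

The single-tuple call above stands in for the per-case lists, which were kept private by the organizers.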
Purpose: We quantified patient-record documentation of sacral neuromodulation (SNM) threshold testing and programming parameters at our institution to identify opportunities to improve therapy outcomes and future SNM technologies. Methods: A retrospective review was conducted using 127 records from 40 SNM patients. Records were screened for SNM documentation, including qualitative and quantitative data: the qualitative data covered indirect references to threshold testing, and the quantitative data included efficacy descriptions and the device programming used by the patient. Findings were categorized by visit type: percutaneous nerve evaluation (PNE), stage 1 (S1) permanent lead implantation, stage 2 (S2) permanent impulse generator implantation, device-related follow-up, or surgical removal. Results: Documentation of threshold testing was more complete during initial implant visits (PNE and S1), less complete for S2 visits, and infrequent for follow-up clinical visits. Surgical motor thresholds were most often referred to using only qualitative comments such as “good response” (88% and 100% for PNE and S1, respectively) and less commonly included quantitative values (68%, 84%), locations of response (84%, 83%), or the specific contacts used for testing (0%). S2 motor thresholds were less well documented, with qualitative, quantitative, and anatomical-location outcomes at 70%, 48%, and 36%, respectively. Surgical notes did not include the specific stimulation parameters or contacts used for tests. Postoperative sensory tests were often only qualitative (80% and 67% for PNE and S1), with quantitative values documented much less frequently (39%, 9%), and typically lacked sensory locations or electrode-specific results. For follow-up visits, <10% included quantitative sensory test outcomes.
Few records (<7%) included the device program settings recommended for therapy delivery, and none included therapy-use logs. Conclusions: While evidence suggests that contact- and parameter-specific programming can improve SNM therapy outcomes, there is a major gap in the documentation of these data. More detailed testing and documentation could improve therapeutic options for parameter titration and provide design inputs for future technologies.
Objective: To better understand public perception and comprehension of medical technology such as artificial intelligence (AI) and robotic surgery. Additionally, to identify sensitivity to, and comfort with, the use of AI and robotics in medicine in order to ensure acceptability and quality of counseling and to guide future development. Subjects and Methods: A survey was conducted on a convenience sample of visitors to the Minnesota State Fair (n = 264). The survey investigated participant beliefs about the capabilities of AI and robotics in medicine and their comfort with such technology. Participants were randomized to receive one of two similar surveys. In the first, a diagnosis was made by a physician and in the second by an AI application, in order to compare confidence in human and computer-based diagnosis. Results: The median age of participants was 45 (IQR 28-59); 58% were female (n = 154) vs. 42% male (n = 110); 69% had completed at least a bachelor's degree; 88% were Caucasian (n = 233) vs. 12% ethnic minorities (n = 31); and participants came from 12 US states, most from the Upper Midwest. Participants had nearly equal trust in AI and physician diagnoses; however, they were significantly more likely to trust an AI diagnosis of cancer over a doctor's diagnosis when responding to the version of the survey that suggested an AI could make medical diagnoses (p = 9.32e-06). Though 55% of respondents (n = 145) reported they were uncomfortable with automated robotic surgery, the majority of the individuals surveyed (88%) mistakenly believed that partially autonomous surgery was already being performed. Almost all (94%) stated they would be willing to pay for an AI to review their medical imaging, if available. Conclusion: Most participants expressed confidence in AI providing medical diagnoses, sometimes even over human physicians. Participants generally expressed concern about surgical AI, but mistakenly believed it is already happening.
As AI applications make their way into medical practice, health care providers should be cognizant of patient misconceptions and the sensitivity that patients have to how such technology is represented.