2007
DOI: 10.1016/j.cmpb.2007.06.004
Fast collision detection based on nose augmentation virtual surgery

Cited by 11 publications (12 citation statements)
References 5 publications
“…The most frequently used algorithms were supervised machine learning (n = 11, 25%) 29–31, 33, 34, 47, 53, 54, 58, 59, 65 and artificial neural networks (n = 11, 25%). 36, 40, 42, 43, 50, 57, 60–64 Other algorithms used were convolutional neural networks (n = 8, 19%), 25, 27, 28, 37, 38, 39, 49, 56 unsupervised machine learning (n = 4, 9%), 41, 45, 55, 68 natural language processing (n = 4, 9%), 34, 48, 67, 68 generative adversarial networks (n = 2, 5%), 17, 26 computer vision (n = 2, 5%), 32, 52 and combinations of models (combo; n = 2, 5%). 43, 44 Input features were typically composed of raw and preprocessed variables, such as subject characteristics (age, lapse time, comorbidities, vital signs and laboratory values, anatomical and wound measurements, tissue reflectance spectrum), clinical images (facial photography, CT images, angiography, photoplethysmography, dermatoscopy, 3D cephalograms), surgical factors (surgical approach, intraoperative interactions with equipment), and synthetic or experimentally derived metrics (external muscle stimulation pulse widths, frequently asked questions).…”
Section: Results
confidence: 99%
“…All but eight studies (n = 36, 82%) provided disclosures. A study aim was clearly stated in all papers, and all but two articles 67, 68 reported the data source used (n = 42, 95%). Most papers reported equivalent comparison groups (n = 35, 76%), though fewer studies compared AI to an adequate control group (i.e., a gold-standard diagnostic test or therapeutic intervention) (n = 28, 64%) or a contemporary, nonhistorical ground truth (n = 29, 66%).…”
Section: Results
confidence: 99%
“…Xie [19] combines hierarchical bounding spheres with GPU acceleration to simulate collision detection among rigid bodies for rhinoplasty. Shen [20] adopts a mixed bounding-volume-hierarchy tree to quickly detect the sets of potentially colliding objects, and then uses a streaming-pattern algorithm for accurate collision detection.…”
confidence: 99%
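The two-phase scheme described in that citation statement rests on a cheap sphere–sphere overlap test: a broad phase using bounding spheres rejects most object pairs, and only survivors reach an exact narrow phase. A minimal sketch of that idea follows; the names (`Sphere`, `broad_phase`) and the flat one-level hierarchy are illustrative assumptions, not the actual algorithms of the cited papers, which use deeper hierarchies and GPU/streaming execution.

```python
from dataclasses import dataclass

@dataclass
class Sphere:
    # Center coordinates and radius of a bounding sphere.
    x: float
    y: float
    z: float
    r: float

def spheres_overlap(a: Sphere, b: Sphere) -> bool:
    # Two spheres intersect iff the distance between centers is at most
    # the sum of radii; compare squared values to avoid a sqrt.
    dx, dy, dz = a.x - b.x, a.y - b.y, a.z - b.z
    return dx * dx + dy * dy + dz * dz <= (a.r + b.r) ** 2

def broad_phase(bounds_a: Sphere, bounds_b: Sphere,
                prims_a: list, prims_b: list) -> list:
    """One-level bounding-sphere culling: if the two objects' bounding
    spheres miss, no primitive pair can collide, so skip all exact tests.
    Otherwise return the candidate pairs whose primitive spheres overlap
    (stand-in for a real narrow phase)."""
    if not spheres_overlap(bounds_a, bounds_b):
        return []
    return [(i, j)
            for i, pa in enumerate(prims_a)
            for j, pb in enumerate(prims_b)
            if spheres_overlap(pa, pb)]
```

In a full bounding-volume hierarchy, the same overlap test is applied recursively down a tree of spheres, so disjoint subtrees are pruned in logarithmic rather than quadratic time.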