2015
DOI: 10.1117/12.2189248

Implementation and evaluation of an interictal spike detector

Cited by 14 publications (25 citation statements). References 15 publications.
“…Human annotations of IEDs in intracranial EEG exhibit significant interrater variability. To avoid potential inconsistencies in human review while marking a large amount of data, we chose to use an automated IED detector after validating it relative to multiple clinicians. Figure B shows an example IED detection.…”
Section: Methods
confidence: 99%
“…Recently, template‐matching spike detection has been used more routinely for research. The original algorithm was modified based on observations from a dataset of hippocampal recordings and then validated on two independent datasets, each annotated by three neurophysiologists. Furthermore, the detector performed with a receiver operating characteristic (ROC) curve similar to that of another published detector.…”
Section: Methods
confidence: 99%
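
The citing papers describe the detector only at a high level, so the exact algorithm is not reproduced here. As a rough sketch of what template-matching spike detection generally involves, the code below correlates a single averaged spike template against one EEG channel and thresholds the normalized cross-correlation; the function name, threshold, refractory period, and toy signal are illustrative assumptions, not values from the cited work.

import numpy as np

def detect_ieds(eeg, template, fs, threshold=0.7, refractory_s=0.2):
    """Illustrative template-matching IED detection on one EEG channel.

    eeg          : 1-D array, a single channel of EEG
    template     : 1-D array, averaged spike waveform used as the template
    fs           : sampling rate in Hz
    threshold    : normalized cross-correlation threshold (assumed value)
    refractory_s : minimum spacing between reported detections, in seconds
    """
    # Normalize the template once (zero mean, unit variance).
    template = (template - template.mean()) / (template.std() + 1e-12)
    win = len(template)
    # Sliding-window normalized cross-correlation between EEG and template.
    scores = np.zeros(len(eeg) - win + 1)
    for i in range(len(scores)):
        seg = eeg[i:i + win]
        seg = (seg - seg.mean()) / (seg.std() + 1e-12)
        scores[i] = np.dot(seg, template) / win
    # Keep windows above threshold, enforcing a refractory period so one
    # spike is not reported several times.
    refractory = int(refractory_s * fs)
    detections, last = [], -refractory
    for i in np.flatnonzero(scores >= threshold):
        if i - last >= refractory:
            detections.append(int(i))
            last = i
    return detections, scores

# Toy usage with a synthetic spike (all values illustrative):
fs = 256
t = np.arange(64) / fs
template = np.exp(-((t - t[32]) ** 2) / (2 * 0.01 ** 2))  # toy Gaussian spike template
rng = np.random.default_rng(0)
eeg = rng.standard_normal(10 * fs)
eeg[1000:1064] += 8 * template                             # inject one synthetic "spike"
print(detect_ieds(eeg, template, fs)[0])                   # should report a detection near sample 1000

A real detector of the kind described above would typically run over many channels, use templates derived from the recordings themselves, and tune its threshold against clinician annotations, which is the validation step the citation statements refer to.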
“…We utilized a template‐based IED detector that was validated and performed comparably to clinicians at Dartmouth‐Hitchcock (DH) and other published detectors 2,18–20. In terms of inter‐reviewer variability, this template‐matching detector showed similar performance (average agreement with neurophysiologists = 0.54, range = 0.43–0.66) to three neurophysiologists (average agreement between neurophysiologists = 0.43, range = 0.27–0.70) and higher performance than another published detector (average agreement with neurophysiologists = 0.27, range = 0.17–0.40) 18. Thus, we elected to use our automated detector due to the time‐intensive nature of human annotations of IEDs and the notable inter‐rater variability between human reviewers 21.…”
Section: Methods
confidence: 71%
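
The agreement figures quoted above (0.54, 0.43, 0.27) come from the cited studies; the exact agreement metric is not given in these excerpts. One common event-level convention is to count two marks as the same IED when they fall within a small tolerance window and to divide the number of matched events by the mean event count. The sketch below assumes that convention; the function name, tolerance, and example marks are illustrative, not taken from the papers.

import numpy as np

def pairwise_agreement(marks_a, marks_b, fs, tol_s=0.1):
    """Illustrative event-level agreement between two sets of IED marks.

    marks_a, marks_b : sample indices of detected or annotated spikes
    fs               : sampling rate in Hz
    tol_s            : marks within this many seconds count as the same event
    """
    tol = tol_s * fs
    a = np.sort(np.asarray(marks_a, dtype=float))
    b = np.sort(np.asarray(marks_b, dtype=float))
    matched, j = 0, 0
    for t in a:
        # Advance past b-marks that are too early to match this a-mark.
        while j < len(b) and b[j] < t - tol:
            j += 1
        # Greedy one-to-one match within the tolerance window.
        if j < len(b) and abs(b[j] - t) <= tol:
            matched += 1
            j += 1
    denom = (len(a) + len(b)) / 2
    return matched / denom if denom else 1.0

# Toy usage: detector marks vs. one reviewer's marks (illustrative indices).
det = [1000, 5200, 9000]
reviewer = [1010, 5400, 9020, 12000]
print(round(pairwise_agreement(det, reviewer, fs=256), 2))  # 2 matched of 3.5 mean events -> 0.57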