Background: In forensic science, proving the authenticity of an audio recording plays an important role. In recent times, forensic experts mostly receive digital recordings for authentication rather than analog recordings. A digitally altered audio signal leaves no visual indication of tampering and can be indistinguishable from an original audio signal. Objective: To highlight the significance of the input audio latency feature of mobile phone handsets in forensic science by comparing the input audio latency of Samsung and Motorola mobile phones in two audio formats. Methods: Two well-established and widely used mobile phone brands were considered for comparison: SAMSUNG and MOTOROLA. Digital audio samples were recorded using 20 mobile phones of various models from these two makes, in two audio formats, WAV and 3GP. The audio samples were then analysed with Adobe Audition 3.0 software for the input audio latency feature of the mobile phones and compared. Findings: The input audio latency value of digital audio recordings can be helpful in the forensic identification of the make and model of the source mobile phone. Novelty: A new technique in digital forensics that classifies given audio samples on the basis of the input audio latency feature and identifies the make of the source mobile handset.
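As a minimal sketch of how input audio latency might be estimated programmatically (the study itself used Adobe Audition 3.0 for its measurements), one common approach is to cross-correlate a known reference signal with the phone's recording and read off the delay at the correlation peak. The function name, synthetic signals, and 120 ms delay below are all illustrative assumptions, not values from the study:

```python
import numpy as np

def estimate_latency_ms(reference, recording, sample_rate):
    """Return the lag (in ms) at which the recording best aligns with the reference."""
    # Full cross-correlation; index of the peak gives the best-matching lag.
    corr = np.correlate(recording, reference, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(reference) - 1)
    return 1000.0 * lag_samples / sample_rate

sr = 8000
# A noise burst makes the correlation peak unambiguous (a pure tone would not).
ref = np.random.default_rng(0).standard_normal(sr)      # 1 s reference signal
delay = int(0.120 * sr)                                   # simulate 120 ms input latency
rec = np.concatenate([np.zeros(delay), ref])              # "recorded" delayed copy

print(estimate_latency_ms(ref, rec, sr))  # → 120.0
```

In practice the reference and recording would come from a controlled playback-and-record setup rather than synthetic arrays, and the estimated latency per handset could then be compared across makes and models.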
Objectives: Clinician-reported outcomes (ClinROs) are often crucial primary endpoints in therapeutic areas where objective clinical assessments are unavailable or infeasible, and they are routinely used in dermatology clinical trials. Due to the nature and presentation of dermatological symptoms, standardized training and assessment are needed to reduce the variability of subjective ratings and increase data quality. This study assessed the impact of standardized clinician (rater) training on a dermatology outcome measure. Methods: Clinicians visually assessed the severity of dermatological conditions in human examples using a Clinician-Reported Severity Scale ranging from 0 (none), 1 (almost none), 2 (mild), and 3 (moderate) to 4 (severe). Clinician ratings were obtained on example photographic images and on human models. Ratings were recorded pre-training and post-training and compared to the consensus of an expert panel. Results: 94 clinicians' scores were compared to those of an expert panel through weighted kappa analyses. Pre-training kappa coefficients ranged from 0.33 to 1.00 with a mean of 0.87 and a standard deviation of 0.10. Post-training kappa coefficients ranged from 0.90 to 0.98 with a mean of 0.96 and a standard deviation of 0.02. A paired t-test comparing pre- and post-training kappa coefficients found a significant increase: MD = 0.09, 95% CI [0.07, 0.11], t(93) = 8.54, p < .001. Conclusions: Dermatology clinical trials often rely on subjective ClinROs as common endpoint measures. This research supports that standardized rater training can greatly improve the data quality of ClinROs in dermatology clinical trials.
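The agreement analysis described above can be sketched with a quadratically weighted Cohen's kappa, which penalizes disagreements on an ordinal scale by the square of their distance. The ratings below are invented for illustration and are not the study's data; the abstract does not state which weighting scheme was used, so quadratic weighting is an assumption:

```python
import numpy as np

def weighted_kappa(rater, expert, n_categories=5):
    """Quadratically weighted Cohen's kappa for ordinal ratings 0..n_categories-1."""
    rater, expert = np.asarray(rater), np.asarray(expert)
    # Joint distribution of (rater, expert) ratings.
    observed = np.zeros((n_categories, n_categories))
    for r, e in zip(rater, expert):
        observed[r, e] += 1
    observed /= observed.sum()
    # Chance agreement from the marginal distributions.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: 0 on the diagonal, 1 at maximum distance.
    i, j = np.indices((n_categories, n_categories))
    weights = ((i - j) ** 2) / (n_categories - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

expert = [0, 1, 2, 3, 4, 2, 3, 1, 0, 4]   # illustrative expert-panel consensus
pre    = [0, 2, 2, 3, 3, 1, 3, 1, 1, 4]   # one rater before training
post   = [0, 1, 2, 3, 4, 2, 3, 1, 0, 4]   # same rater after training

k_pre, k_post = weighted_kappa(pre, expert), weighted_kappa(post, expert)
print(round(k_pre, 2), round(k_post, 2))  # → 0.88 1.0
```

With each rater's pre- and post-training coefficients computed this way, a paired t-test across the 94 raters (e.g. `scipy.stats.ttest_rel`) would reproduce the kind of significance test reported in the results.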