In the DECODE project, data were collected from 3,114 surveys completed by symptomatic patients who underwent RT-qPCR testing for SARS-CoV-2 at a single university centre between March and September 2020. The population was balanced with respect to sex and age and included 759 SARS-CoV-2(+) patients. The most discriminative symptoms in SARS-CoV-2(+) patients at the early infection stage were loss of taste/smell (OR = 3.33, p < 0.0001), body temperature above 38 °C (OR = 1.67, p < 0.0001), muscle aches (OR = 1.30, p = 0.0242), headache (OR = 1.27, p = 0.0405), and cough (OR = 1.26, p = 0.0477). Dyspnea was reported more often among SARS-CoV-2(−) patients (OR = 0.55, p < 0.0001). Co-occurring cough and dyspnea were 3.5 times more frequent among SARS-CoV-2(−) patients (OR = 0.28, p < 0.0001). Co-occurrence of cough, muscle aches, headache, and loss of taste/smell was significant (OR = 4.72, p = 0.0015), whereas co-occurrence of only two symptoms, cough and loss of taste/smell, yielded OR = 2.49 (p < 0.0001). Temperature above 38 °C with cough was most frequent in men (20%), while loss of taste/smell with cough was most frequent in women (17%). For younger people, taste/smell impairment alone is sufficient to characterise infection, whereas in older patients the co-occurrence of fever and cough is necessary. The presented study objectively quantifies the significance of single symptoms and their interactions in COVID-19 diagnosis and demonstrates diverse symptomatology across patient groups.
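To make the reported statistics concrete, the following is a minimal sketch of how an odds ratio and its p-value are derived from a 2×2 contingency table. The cell counts are hypothetical (only the column totals mirror the cohort sizes above) and do not reproduce any figure from the DECODE study.

```python
# Minimal sketch: odds ratio and Fisher's exact p-value from a 2x2 table.
# Cell counts are HYPOTHETICAL; only the column totals (759 positives,
# 2,355 negatives) mirror the cohort described in the abstract.
from scipy.stats import fisher_exact

# Rows: symptom present / absent; columns: SARS-CoV-2(+) / SARS-CoV-2(-)
table = [[120, 100],    # symptom present
         [639, 2255]]   # symptom absent

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4g}")
```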
Tracking and action-recognition algorithms are widely used in video surveillance, urban-activity monitoring, and many other areas. Their development relies heavily on benchmarking scenarios, which enable reliable evaluation and improvement of their performance. Currently, benchmarking methods for tracking and action-recognition algorithms rely on manual annotation of video databases, which is prone to human error, limited in size, and time-consuming. Here, drawing on the experience gained with such methods, an alternative benchmarking solution is presented that employs methods and tools from the computer-game domain to create simulated video data with automatic annotations. The presented approach substantially outperforms existing solutions in the size of the generated data and the variety of annotations it can produce. With the proposed system, a user can generate sequences of randomized images spanning different times of day, weather conditions, and scenes for use in tracking evaluation. The design of the proposed tool builds on and extends the concept of crowd simulation. The system is validated through comparisons with existing methods.
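As an illustration of how such a generator might be parameterized, the sketch below draws randomized scene configurations. The class and parameter names are hypothetical and are not the tool's actual API.

```python
# Hypothetical sketch of parameterizing a synthetic-scene generator;
# names and value lists are illustrative, not the actual tool's API.
import random
from dataclasses import dataclass

TIMES_OF_DAY = ["dawn", "noon", "dusk", "night"]
WEATHER = ["clear", "rain", "fog", "snow"]
SCENES = ["street", "park", "station"]

@dataclass
class SceneConfig:
    time_of_day: str
    weather: str
    scene: str
    num_pedestrians: int

def random_config(rng: random.Random) -> SceneConfig:
    """Draw one randomized scene configuration for rendering."""
    return SceneConfig(
        time_of_day=rng.choice(TIMES_OF_DAY),
        weather=rng.choice(WEATHER),
        scene=rng.choice(SCENES),
        num_pedestrians=rng.randint(5, 100),
    )

rng = random.Random(42)                                # reproducible batch
configs = [random_config(rng) for _ in range(10)]      # ten scene variants
```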
The automatic detection of violent actions in public places through video analysis is difficult because the Artificial Intelligence techniques employed often suffer from generalization problems. Indeed, these algorithms hinge on large quantities of annotated data and usually experience a drastic drop in performance when used in scenarios never seen during the supervised learning phase. In this paper, we introduce and publicly release the Bus Violence benchmark, the first large-scale collection of video clips for violence detection on public transport, in which actors simulated violent actions inside a moving bus under changing conditions, such as background and lighting. Moreover, we conduct a performance analysis on this newly established use case of several state-of-the-art video violence detectors pre-trained on general violence detection databases. The moderate performances achieved reveal the generalization difficulties of these popular methods and indicate the need for this new collection of labeled data, which is beneficial for specializing them to this new scenario.
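The generic evaluation loop for such a study can be sketched as follows. The model interface and dataset iterator here are assumptions for illustration, not the paper's actual evaluation code.

```python
# Hypothetical sketch of scoring a pre-trained clip classifier on a new
# labeled benchmark; the model interface and dataset format are assumed.
from typing import Callable, Iterable, Tuple
import numpy as np

Clip = np.ndarray  # e.g. (frames, height, width, channels)

def accuracy(model: Callable[[Clip], int],
             dataset: Iterable[Tuple[Clip, int]]) -> float:
    """Binary accuracy over (clip, label) pairs; 1 = violent, 0 = not."""
    correct = total = 0
    for clip, label in dataset:
        correct += int(model(clip) == label)
        total += 1
    return correct / max(total, 1)
```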
Data scarcity has become one of the main obstacles to developing supervised Artificial Intelligence models in Computer Vision. Indeed, Deep Learning models systematically struggle when applied in new scenarios never seen during training and may not be adequately tested in non-ordinary yet crucial real-world situations. This paper presents and publicly releases CrowdSim2, a new synthetic collection of images for people and vehicle detection, gathered from a simulator based on the Unity graphics engine. It consists of thousands of images from various synthetic scenarios resembling the real world, in which we varied several factors of interest, such as weather conditions and the number of objects in the scene. The labels are collected automatically and consist of bounding boxes that precisely localize objects of the two classes, removing humans from the annotation pipeline. We exploited this new benchmark as a testing ground for several state-of-the-art detectors, showing that our simulated scenarios can be a valuable tool for measuring their performance in a controlled environment.
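When scoring detectors against bounding-box ground truth of this kind, the core matching step is Intersection-over-Union between predicted and annotated boxes. A minimal sketch follows; the box coordinates are illustrative, not taken from CrowdSim2.

```python
# Minimal sketch: IoU between a predicted and a ground-truth box, the
# core matching step when scoring detectors on a bounding-box benchmark.
# Boxes are (x_min, y_min, x_max, y_max); the values below are made up.
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.143
```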