A heterogeneous sensor network offers a highly effective means of communicating with the international community, first responders, and humanitarian assistance agencies, as long as affected populations have access to the Internet during disasters. When communication networks fail in an emergency, however, even emergency services struggle to communicate with one another. In such situations, field data can be collected from nearby sensors by deploying a wireless sensor network and a delay-tolerant network over the region to be monitored. When data must be sent to the operations center with no telecommunication infrastructure available, HF radio, satellite links, and high-altitude platforms are the only options, with HF using Near Vertical Incidence Skywave (NVIS) being the most cost-effective and easiest to install. Sensed data in disaster situations can serve a wide range of interests and needs (scientific, technical, and operational information for decision-makers). The proposed monitoring architecture addresses communication with the public during emergencies, using movable and deployable resource unit technologies to sense, exchange, and distribute information for humanitarian organizations. The challenge is to show how sensed data and information management contribute to a more effective and timely response that improves the quality of life of the affected populations. Our proposal was tested under real emergency conditions in Europe and Antarctica.
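As a rough illustration of the store-and-forward pattern such a delay-tolerant architecture implies, the following Python sketch buffers sensor readings at a relay node and flushes them over whichever backhaul link is currently usable. All class and link names here are hypothetical; the paper itself does not provide code.

```python
import time
from collections import deque
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    timestamp: float
    payload: dict

class BackhaulLink:
    """Abstract backhaul; concrete links report availability and send frames."""
    name = "abstract"
    def available(self) -> bool:
        raise NotImplementedError
    def send(self, reading: Reading) -> bool:
        raise NotImplementedError

class NvisHfLink(BackhaulLink):
    """Stand-in for an HF/NVIS modem: low data rate, but needs no infrastructure."""
    name = "HF-NVIS"
    def available(self) -> bool:
        return True  # assume the skywave channel is usable in this toy example
    def send(self, reading: Reading) -> bool:
        print(f"[{self.name}] sent {reading.sensor_id} @ {reading.timestamp:.0f}")
        return True

class DtnRelay:
    """Store-and-forward node: buffer while disconnected, flush when a link is up."""
    def __init__(self, links: list[BackhaulLink], max_buffer: int = 10_000):
        self.links = links  # tried in priority order (e.g. satellite before HF)
        self.buffer: deque[Reading] = deque(maxlen=max_buffer)  # drops oldest on overflow
    def ingest(self, reading: Reading) -> None:
        self.buffer.append(reading)
    def flush(self) -> int:
        sent = 0
        for link in self.links:
            if not link.available():
                continue
            while self.buffer and link.send(self.buffer[0]):
                self.buffer.popleft()  # remove only after a successful send
                sent += 1
            break
        return sent

if __name__ == "__main__":
    relay = DtnRelay([NvisHfLink()])
    relay.ingest(Reading("temp-07", time.time(), {"celsius": 21.4}))
    relay.flush()
```

The bounded deque reflects the key delay-tolerant design choice: readings accumulate during outages and the oldest are sacrificed first if the buffer overflows before a link comes back.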
The cognitive connection between the senses of touch and vision is probably the best-known case of multimodality. Recent discoveries suggest that the mapping between the two senses is learned rather than innate. This evidence opens the door to a dynamic multimodality that allows individuals to adaptively develop within their environment. By mimicking this aspect of human learning, we propose a new multimodal mechanism that allows artificial cognitive systems (ACS) to quickly adapt to unforeseen perceptual anomalies generated by the environment or by the system itself. In this context, visual recognition systems have advanced remarkably in recent years thanks to the creation of large-scale datasets together with the advent of deep learning algorithms. However, this has not been the case for the haptic modality, where the lack of two-handed dexterous datasets has limited the ability of learning systems to process the tactile information of human object exploration. This data imbalance hinders the creation of synchronized datasets that would enable the development of multimodality in ACS during object exploration. In this work, we use a recently generated multimodal dataset from tactile sensors placed on a collection of objects, capturing haptic data from human manipulation together with the corresponding visual counterpart. Using this data, we create a multimodal learning transfer mechanism capable of both detecting sudden and permanent anomalies in the visual channel and maintaining visual object recognition performance by retraining the visual mode for a few minutes using haptic information. Our proposal for perceptual awareness and self-adaptation is of noteworthy relevance, as it can be applied by any system that satisfies two very generic conditions: it can classify each mode independently, and it is provided with a synchronized multimodal dataset.
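To make the mechanism concrete, here is a minimal, self-contained sketch of the two generic conditions at work: each mode is classified independently, cross-modal agreement on synchronized samples serves as the anomaly monitor, and the visual mode is repaired by retraining on haptic pseudo-labels. The data, features, and threshold are synthetic and illustrative; this is not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_synced_batch(n=400, n_classes=3):
    """Synthetic synchronized batch: one label, a visual and a haptic view."""
    y = rng.integers(0, n_classes, n)
    onehot = np.eye(n_classes)[y]
    visual = onehot + rng.normal(0, 0.3, (n, n_classes))
    haptic = onehot + rng.normal(0, 0.3, (n, n_classes))
    return visual, haptic, y

# Precondition 1: each mode can be classified independently.
Xv, Xh, y = make_synced_batch()
visual_clf = LogisticRegression(max_iter=1000).fit(Xv, y)
haptic_clf = LogisticRegression(max_iter=1000).fit(Xh, y)

def agreement(Xv, Xh):
    """Cross-modal agreement on a synchronized batch (precondition 2)."""
    return float(np.mean(visual_clf.predict(Xv) == haptic_clf.predict(Xh)))

# Simulate a sudden, permanent visual anomaly: camera channels get
# swapped, so information survives but the old visual model is wrong.
perm = np.array([1, 2, 0])  # fixed non-identity channel permutation
Xv_new, Xh_new, _ = make_synced_batch()
Xv_new = Xv_new[:, perm]

THRESHOLD = 0.6  # illustrative alarm level on cross-modal agreement
if agreement(Xv_new, Xh_new) < THRESHOLD:
    # Repair: retrain the visual mode on post-anomaly inputs, using the
    # still-trusted haptic predictions as pseudo-labels.
    pseudo = haptic_clf.predict(Xh_new)
    visual_clf = LogisticRegression(max_iter=1000).fit(Xv_new, pseudo)

Xv_t, _, y_t = make_synced_batch()
print("visual accuracy after repair:", visual_clf.score(Xv_t[:, perm], y_t))
```

The key point the sketch preserves is that the repair step needs no new ground-truth labels: as long as the haptic mode remains trustworthy, its predictions on synchronized samples are enough to bring the visual classifier back.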