Over the past two decades, hundreds of protocols have been developed for diverse Wireless Sensor Network (WSN) applications, spanning the different layers of the communication stack. Among these, Medium Access Control (MAC) layer protocols are of particular interest because they offer opportunities for optimizing key performance parameters. Despite the availability of a large number of survey articles, there remains a gap for a tutorial that offers guidelines on the process of developing a MAC protocol. In this paper, we present a detailed tutorial on developing a MAC protocol, starting from the identification of a research gap and ending with performance evaluation. As a case study, we describe the development and implementation of a novel asynchronous MAC protocol, ADP-MAC (Adaptive and Dynamic Polling MAC). ADP-MAC deploys a novel concept of channel polling interval distributions and was compared against Synchronized Channel Polling MAC (SCP-MAC) and lightweight Traffic Auto-Adaptation based MAC (T-AAD). Finally, we propose major milestones of protocol development along with recommendations on publishing the research.
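The idea of drawing channel polling intervals from a probability distribution, rather than polling at a fixed period, can be illustrated with a minimal sketch. The function name, the exponential distribution, and the traffic-based adaptation rule below are assumptions for illustration only; they are not taken from the ADP-MAC design itself.

```python
import random

def next_poll_interval(mean_interval, traffic_rate, alpha=0.5):
    """Hypothetical sketch: draw the next channel-polling interval (ms)
    from an exponential distribution whose mean shrinks as the observed
    traffic rate grows, so a busy node polls more often on average.
    All names and the adaptation rule here are illustrative assumptions."""
    adapted_mean = mean_interval / (1.0 + alpha * traffic_rate)
    return random.expovariate(1.0 / adapted_mean)

random.seed(1)
# With a nominal 100 ms mean and traffic_rate = 2, the adapted mean is
# 100 / (1 + 0.5 * 2) = 50 ms; averaging many draws should land near it.
intervals = [next_poll_interval(100.0, traffic_rate=2.0) for _ in range(1000)]
avg = sum(intervals) / len(intervals)
print(avg)
```

Randomizing the polling schedule in this way avoids repeated synchronized collisions between neighbors that a fixed polling period can produce.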
Endowing machines with the human capability of detecting, understanding, and contextualizing objects in the real world has long been a dream of computer scientists. Among the important open challenges in computer vision, image captioning with content and context awareness is a significant research problem. In our research, we attempt to develop a human-like storytelling system that captions images from the perspective of content, context, syntax, and knowledge. Our methodology combines Capsule Networks for image encoding, a Knowledge Graph for content and context awareness, and Transformer Neural Networks for decoding. During feature extraction, spatial, geometrical, and orientational details are extracted using Capsule Networks. To equip the model with content, context, and semantics, the corpus is passed through the Knowledge Graph. The decoding phase combines the Knowledge Graph and a Transformer Neural Network for knowledge-driven captioning. Dynamic multi-headed attention in the decoder is used for memory optimization. Our model is trained on MSCOCO. The results show good content and context understanding, with scores of BLEU-4: 18.23, METEOR: 19.2, ROUGE: 41.1, and CIDEr: 54.19. The primary outcome of our research is the generation of autonomous, story-type captions for real-world images.
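The multi-headed attention used in the Transformer decoder can be sketched in a few lines of NumPy. This is a generic scaled dot-product attention split across heads, not the paper's dynamic variant; the array shapes and head count are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(q, k, v, num_heads):
    """Generic multi-head scaled dot-product attention (a sketch, not the
    paper's dynamic variant). q, k, v are (seq_len, d_model) arrays with
    d_model divisible by num_heads."""
    seq_len, d_model = q.shape
    d_head = d_model // num_heads
    # Split the model dimension into heads: (num_heads, seq_len, d_head).
    split = lambda x: x.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    qh, kh, vh = split(q), split(k), split(v)
    # Per-head attention weights over the sequence, scaled by sqrt(d_head).
    scores = qh @ kh.transpose(0, 2, 1) / np.sqrt(d_head)
    out = softmax(scores) @ vh                       # (heads, seq, d_head)
    # Re-merge the heads back into the model dimension.
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                      # 5 tokens, d_model = 8
y = multi_head_attention(x, x, x, num_heads=2)
print(y.shape)  # (5, 8)
```

Each head attends over the token sequence independently, which lets different heads specialize; a dynamic variant could vary the head count or attention span to trade accuracy against memory, as the abstract's memory-optimization remark suggests.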