Nowadays, there is tremendous growth in Internet of Things (IoT) applications in our everyday lives. The proliferation of smart devices, sensor technology, and the Internet makes it possible for the digital and physical worlds to communicate seamlessly, enabling distributed data collection, communication, and processing across many applications. However, monitoring and tracking objects in real time is challenging due to the distinct characteristics of IoT systems, e.g., scalability, mobility, and the resource-limited nature of the devices. In this paper, we address the significant issue of real-time IoT object tracking. We propose a system called ‘TrackInk’ to demonstrate our idea. TrackInk is capable of pointing toward and taking pictures of visible satellites in the night sky, including but not limited to the International Space Station (ISS) and the moon. Data are collected from sensors to determine the system’s geographical location along with its 3D orientation, allowing the system to be repositioned freely. Additionally, TrackInk communicates with and sends data to ThingSpeak for cloud-based storage and further data analysis. Our proposed system is lightweight, highly scalable, and performs efficiently in a resource-limited environment. We discuss the system architecture in detail and show performance results using a real-world hardware-based experimental setup.
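The core pointing computation such a system needs is converting the device's geographical location and a satellite's position into look angles (azimuth and elevation). The abstract does not give TrackInk's actual implementation, so the following is only a minimal sketch of the standard ECEF-to-East-North-Up transformation, using a spherical-Earth approximation; the function names and the simplified Earth model are assumptions, not details from the paper.

```python
import math

EARTH_R = 6371e3  # mean Earth radius in metres (spherical model for simplicity)

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Spherical-Earth approximation of geodetic coordinates -> ECEF (metres)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = EARTH_R + alt_m
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def look_angles(obs_lat, obs_lon, obs_alt, sat_ecef):
    """Azimuth/elevation (degrees) from an observer to a satellite ECEF position."""
    ox, oy, oz = geodetic_to_ecef(obs_lat, obs_lon, obs_alt)
    dx, dy, dz = sat_ecef[0] - ox, sat_ecef[1] - oy, sat_ecef[2] - oz
    lat, lon = math.radians(obs_lat), math.radians(obs_lon)
    # Rotate the ECEF difference vector into the local East-North-Up frame
    e = -math.sin(lon) * dx + math.cos(lon) * dy
    n = (-math.sin(lat) * math.cos(lon) * dx
         - math.sin(lat) * math.sin(lon) * dy
         + math.cos(lat) * dz)
    u = (math.cos(lat) * math.cos(lon) * dx
         + math.cos(lat) * math.sin(lon) * dy
         + math.sin(lat) * dz)
    az = math.degrees(math.atan2(e, n)) % 360.0       # clockwise from north
    el = math.degrees(math.asin(u / math.sqrt(e*e + n*n + u*u)))
    return az, el
```

In a deployed tracker these angles would additionally be corrected by the device's sensed 3D orientation before driving the pointing mechanism; a production system would also use a proper ellipsoidal Earth model (WGS84) and up-to-date orbital elements rather than this simplified geometry.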
Devices that read electroencephalography (EEG) signals have been widely used for Brain-Computer Interfaces (BCIs). Interest in BCIs has increased in recent years with the development of several consumer-grade EEG devices that can detect human cognitive states in real time and deliver feedback to enhance human performance. Several previous studies have examined the fundamentals and essential aspects of EEG in BCIs. However, the significant question of how consumer-grade EEG devices can be used to control mechatronic systems effectively has received less attention. In this paper, we design and implement an EEG BCI system using the OpenBCI Cyton headset and a user interface running a game, to explore the concept of streamlining the interaction between humans and mechatronic systems through a BCI EEG-mechatronic system interface. Big Multimodal Social Data (BMSD) analytics can be applied to the high-frequency, high-volume EEG data, allowing us to explore aspects of data acquisition, data processing, and data validation, and to evaluate the Quality of Experience (QoE) of our system. We employ real-world participants to play a game to gather training data that is later fed into multiple machine learning models, including linear discriminant analysis (LDA), k-nearest neighbours (KNN), and a convolutional neural network (CNN). After training the machine learning models, a validation phase of the experiment took place in which participants tried to play the same game without direct control, using the outputs of the machine learning models to determine how the game moved. We find that a CNN trained on the specific user was able to control the game and achieved the highest activation accuracy among the machine learning models tested, along with the highest user-rated QoE, which gives us significant insight for future implementation with a mechatronic system.
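Of the three classifiers compared above, KNN is the simplest to illustrate: each new trial is labelled by a majority vote among its nearest training trials in feature space. The abstract does not describe the actual features or data used, so the sketch below is purely illustrative; the two-dimensional band-power features and their values are hypothetical, and a real pipeline would extract features from raw multi-channel EEG recordings first.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples (Euclidean distance) -- the KNN model in the pipeline."""
    nearest = sorted((math.dist(x, query), label) for x, label in train)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical per-trial (alpha, beta) band-power features with intent labels;
# real values would come from the participant's recorded EEG training session.
train = [
    ((0.8, 0.2), "left"),  ((0.7, 0.3), "left"),  ((0.9, 0.1), "left"),
    ((0.2, 0.9), "right"), ((0.3, 0.8), "right"), ((0.1, 0.7), "right"),
]
```

During the validation phase described above, a prediction such as `knn_predict(train, (0.85, 0.15))` would be mapped to a game movement in place of direct user input; the LDA and CNN models slot into the same predict-then-act loop.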