Abstract: The sigmoid function and ReLU are commonly used activation functions in neural networks (NN). However, the sigmoid function is vulnerable to the vanishing gradient problem, while ReLU suffers from a special form of it known as the dying ReLU problem. Although many studies have proposed methods to alleviate this problem, no efficient and feasible solution has emerged. Hence, we propose a method that replaces the original derivative function with an artificial derivative in a pertinent way. Our method optimized gradie…
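The dying ReLU problem and the kind of "artificial derivative" fix sketched in the abstract above can be illustrated in a few lines of NumPy. The abstract does not specify the exact surrogate derivative used, so the constant negative-side slope `alpha` below is purely a hypothetical stand-in, not the paper's method:

```python
import numpy as np

def relu(x):
    """Standard ReLU: max(0, x)."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """True ReLU derivative: 0 for x <= 0, so a unit stuck in the
    negative region receives no gradient and can never recover
    (the 'dying ReLU' problem)."""
    return (x > 0).astype(float)

def surrogate_grad(x, alpha=0.1):
    """Hypothetical 'artificial derivative': substitute a small slope
    alpha for the zero gradient on the negative side, so inactive
    units still receive an update signal. The slope 0.1 is an
    illustrative assumption, not the value from the paper."""
    return np.where(x > 0, 1.0, alpha)

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(relu_grad(x))       # [0. 0. 1. 1.]  negative inputs are 'dead'
print(surrogate_grad(x))  # [0.1 0.1 1. 1.] negative inputs keep a small gradient
```

Note that the forward pass (`relu`) is left unchanged; only the backward signal is altered, which is what distinguishes this family of fixes from simply switching to a leaky activation.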
“…The selected model implements a DNN with hidden LSTM layers ( Figure 7 ). We used the rectified linear activation function (ReLU), since it overcomes the vanishing gradient problems present in RNNs [ 46 , 47 ]. It also allows models to learn faster and perform better.…”
Most industrial workplaces involving robots and other apparatus operate behind fences to prevent defects, hazards, or casualties. Recent advancements in machine learning can enable robots to cooperate with human co-workers while retaining safety, flexibility, and robustness. This article focuses on a computational model that provides a collaborative environment through intuitive and adaptive human–robot interaction (HRI). In essence, each layer of the model can be expressed as a set of useful information utilized by an intelligent agent, and within this construction a vision-sensing modality can be broken down into multiple layers. The authors propose a human-skeleton-based trainable model for the recognition of spatiotemporal human worker activity using LSTM networks, which achieves a training accuracy of 91.365% on the InHARD dataset. Together with the training results, aspects of the simulation environment and future improvements of the system are discussed. By combining human worker upper-body positions with actions, the perceptual potential of the system is increased, and human–robot collaboration becomes context-aware. Based on the acquired information, the intelligent agent gains the ability to adapt its behavior to its dynamic and stochastic surroundings.
“…The ReLU expedites the training and avoids the vanishing gradient [ 49 ]. The last layer in the network is called the output layer (classification layer), which gives the probability of occurrence of different classes.…”
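The output (classification) layer described in the snippet above yields per-class probabilities, which is conventionally done with a softmax; the excerpt does not name the exact function, so the following is a minimal sketch under that assumption:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtracting the max before
    exponentiating avoids overflow without changing the result."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Illustrative logits for a 3-class output layer (made-up values)
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs.sum())  # 1.0 — a valid probability distribution
```

The key property is that the outputs are non-negative and sum to 1, so each can be read as the probability of occurrence of one class, with the largest logit mapped to the largest probability.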
DNA copy number variation (CNV) is a type of DNA variation associated with various human diseases; a CNV ranges in size from 1 kilobase to several megabases on a chromosome. Most computational research on cancer classification is based on traditional machine learning, which relies on handcrafted feature extraction and selection. To the best of our knowledge, existing deep learning-based research also uses a feature extraction and selection step. To understand the differences between multiple human cancers, we developed three end-to-end deep learning models, i.e., a DNN (fully connected network), a CNN (convolutional neural network), and an RNN (recurrent neural network), to classify six cancer types using the CNV data of 24,174 genes. The strength of an end-to-end deep learning model lies in representation learning (automatic feature extraction). The purpose of proposing more than one model is to find which architecture performs best on CNV data. Our best model achieved 92% accuracy with an ROC of 0.99, and we compared the performance of our proposed models with state-of-the-art techniques, which they outperformed in terms of accuracy, precision, and ROC. In the future, we aim to extend this work to other cancer types.
“…In this network, the earliest layers of the design employ depth-wise separable convolutions to speed up the calculations involved in downsampling the input images. To improve convergence during training, they also devised a batch normalization layer that may reduce internal covariate shift and address the vanishing gradient problem [31]. ResNet50 is a slimmer iteration of ResNet101, which took first place in the ILSVRC classification challenge.…”
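The speed-up from depth-wise separable convolutions mentioned in the snippet above comes largely from their much smaller weight (and multiply) count: a standard convolution mixes spatial and channel dimensions in one step, while the separable version splits it into a per-channel spatial filter plus a 1×1 pointwise mix. A minimal sketch of the count, with the 3×3, 64→128-channel layer chosen purely as an illustrative example:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise step (one k x k filter per input channel) plus
    pointwise step (1 x 1 convolution mixing channels)."""
    return k * k * c_in + c_in * c_out

# Example: 3x3 convolution from 64 to 128 channels
standard = conv_params(3, 64, 128)                   # 9 * 64 * 128 = 73728
separable = depthwise_separable_params(3, 64, 128)   # 9 * 64 + 64 * 128 = 8768
print(standard / separable)  # roughly an 8x reduction in weights
```

The same ratio applies to multiply–accumulate operations per output position, which is why these layers are a common choice for the early, high-resolution stages of a network.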
Recent developments in deep learning have significantly improved the ability of computer vision systems to detect abnormalities in various medical imaging modalities, such as dual-energy X-ray absorptiometry, magnetic resonance imaging (MRI), ultrasonography, and computed tomography. Current techniques and algorithms for identifying, categorizing, and detecting diabetic foot ulcers (DFU) are discussed. A variety of techniques based on traditional machine learning and image processing have been applied to small datasets to find DFU, but these works have kept their datasets and algorithms private. Therefore, the need for end-to-end automated systems that can identify DFU of all grades and stages is critical. The study's goals were to create new CNN-based automatic segmentation techniques to separate surrounding skin from DFU on full-foot images, because surrounding skin serves as a critical visual cue for evaluating the progression of DFU, and to create reliable and portable deep learning techniques for localizing DFU that can be applied to mobile devices for remote monitoring. The second goal was to examine the various diabetic foot diseases in accordance with well-known medical classification schemes. From a computer vision viewpoint, the authors examined the various DFU conditions, including site, infection, neuropathy, bacterial infection, area, and depth. Machine learning techniques were utilized in this study to identify key DFU conditions such as ischemia and bacterial infection.