This study investigated the antioxidant capacities, including ferric reducing antioxidant power (FRAP), 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical-scavenging ability, and hydroxyl radical (·OH) and superoxide anion (O₂·⁻) scavenging abilities, together with total polyphenols (TP) and total anthocyanins (TA), in pomegranate (Punica granatum L.) juice (PJ) and pomegranate wine (PW). The correlations among these measures were also analyzed. Both PJ and PW showed significantly high TP and antioxidant capacities, although some differences existed among the cultivars. Sweet PJ contained 1596.67 mg/L of TP, while sour PJ showed the highest titratable acidity (35.90 g/L) and the lowest pH (2.56). Red PJ had the highest TA (82.26 mg/L) among the three cultivars. Sweet PJ showed higher DPPH-scavenging ability and FRAP than the others. Both PJ and PW exhibited high and relatively stable ·OH-scavenging abilities, and sour PJ and sour PW had higher O₂·⁻-scavenging capacity than the others. Significant positive correlations were observed among TP, DPPH, and FRAP in both PJ and PW. The high correlation between antioxidant capacities and TP indicates that phenolic compounds are the major contributors to the high antioxidant activity of PJ and PW.
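The correlation analysis reported above can be illustrated with a short sketch. The sample values below are hypothetical placeholders, not the study's measurements; they serve only to show how pairwise Pearson correlations among TP, DPPH, and FRAP would be computed.

```python
# Minimal sketch of a pairwise Pearson correlation analysis with NumPy.
# The sample values are hypothetical placeholders, NOT the study's data.
import numpy as np

tp   = np.array([1596.7, 1210.3, 980.5, 1404.2])  # total polyphenols, mg/L
dpph = np.array([88.1, 72.4, 61.0, 80.3])         # DPPH scavenging, %
frap = np.array([9.8, 7.1, 5.9, 8.6])             # FRAP, mmol Fe(II)/L

# 3 x 3 matrix of Pearson correlation coefficients among TP, DPPH, and FRAP
r = np.corrcoef(np.vstack([tp, dpph, frap]))
for name, row in zip(["TP", "DPPH", "FRAP"], r):
    print(name, np.round(row, 3))
```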
Attention mechanisms are often used in deep neural networks for distantly supervised relation extraction (DS-RE) to distinguish valid from noisy instances. However, traditional 1-D vector attention models are insufficient to learn the different contexts involved in selecting valid instances when predicting the relation for an entity pair. To alleviate this issue, we propose a novel multi-level structured (2-D matrix) self-attention mechanism for DS-RE in a multi-instance learning (MIL) framework using bidirectional recurrent neural networks. In the proposed method, a structured word-level self-attention mechanism learns a 2-D matrix in which each row vector represents a weight distribution over different aspects of an instance with respect to the two entities. Targeting the MIL issue, the structured sentence-level attention learns a 2-D matrix in which each row vector represents a weight distribution over the selection of different valid instances. Experiments conducted on two publicly available DS-RE datasets show that the proposed framework with multi-level structured self-attention significantly outperforms state-of-the-art baselines in terms of PR curves, P@N, and F1 measures.
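A minimal sketch of the structured (2-D matrix) self-attention idea follows, in the style of the A = softmax(W2 tanh(W1 Hᵀ)) formulation over BiRNN hidden states. All dimensions and module names are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of structured (2-D matrix) self-attention over BiRNN hidden states:
# each of n_rows attention rows attends to a different aspect of the sentence.
# Sizes are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class StructuredSelfAttention(nn.Module):
    def __init__(self, hidden_dim: int, att_dim: int = 64, n_rows: int = 5):
        super().__init__()
        self.w1 = nn.Linear(hidden_dim, att_dim, bias=False)  # W1
        self.w2 = nn.Linear(att_dim, n_rows, bias=False)      # W2

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden_dim) -- BiRNN hidden states
        a = torch.softmax(self.w2(torch.tanh(self.w1(h))), dim=1)
        a = a.transpose(1, 2)   # (batch, n_rows, seq_len): attention matrix A
        return torch.bmm(a, h)  # (batch, n_rows, hidden_dim): M = A H

# Usage: encode a batch of 20-token sentences with a BiLSTM, then attend.
encoder = nn.LSTM(100, 128, batch_first=True, bidirectional=True)
h, _ = encoder(torch.randn(2, 20, 100))  # (2, 20, 256)
m = StructuredSelfAttention(256)(h)      # (2, 5, 256)
```

The same mechanism can be reapplied at the sentence level, with the rows of A weighting instances in a bag rather than words in a sentence.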
Distant supervision is widely used in relation extraction tasks, particularly when large-scale manual annotation is virtually impossible. Although distantly supervised relation extraction (DSRE) benefits from automatic labelling, it suffers from serious mislabelling issues, i.e., some or all of the instances for an entity pair (head and tail entities) do not express the labelled relation. In this paper, we propose a novel model that employs a collaborative curriculum learning framework to reduce the effects of mislabelled data. Specifically, we first propose an internal self-attention mechanism between the convolution operations in convolutional neural networks (CNNs) to learn better sentence representations from the noisy inputs. We then define two sentence-selection models as two relation extractors that collaboratively learn and regularise each other under a curriculum scheme to alleviate noise, where the curriculum can be constructed from conflicts or from small losses. Finally, experiments conducted on a widely used public dataset show that the proposed model significantly outperforms baselines, including the state of the art, in terms of P@N and PR curve metrics, evidencing its capability to reduce noise in DSRE.
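A minimal sketch of the small-loss curriculum idea follows: two extractors each select their lowest-loss (presumed clean) instances and pass them to the other for updating, in the spirit of co-teaching. The models, optimisers, and keep-rate schedule are assumptions for illustration, not the paper's exact design.

```python
# Sketch of collaborative learning with a small-loss curriculum: each model
# picks its k lowest-loss instances, and its peer trains on those picks.
# Models, optimisers, and the keep-rate schedule are illustrative assumptions.
import torch
import torch.nn.functional as F

def co_train_step(model_a, model_b, opt_a, opt_b, x, y, keep_rate: float):
    loss_a = F.cross_entropy(model_a(x), y, reduction="none")  # per-instance loss
    loss_b = F.cross_entropy(model_b(x), y, reduction="none")
    k = max(1, int(keep_rate * len(y)))     # curriculum: keep the easiest k
    idx_a = torch.topk(-loss_a, k).indices  # small-loss picks by A ...
    idx_b = torch.topk(-loss_b, k).indices  # ... and by B

    opt_a.zero_grad()                       # A learns from B's clean picks
    F.cross_entropy(model_a(x[idx_b]), y[idx_b]).backward()
    opt_a.step()

    opt_b.zero_grad()                       # B learns from A's clean picks
    F.cross_entropy(model_b(x[idx_a]), y[idx_a]).backward()
    opt_b.step()
```

Gradually lowering `keep_rate` over training implements the curriculum: early epochs see most of the data, later epochs concentrate on instances both extractors agree are clean.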
Semantic Textual Similarity (STS) is important for many applications such as plagiarism detection, text paraphrasing, and information retrieval. Current methods for STS rely on statistical machine learning, and recent studies have shown that neural networks for STS yield promising experimental results. In this paper, we propose an attentive Siamese Long Short-Term Memory (LSTM) network for measuring semantic textual similarity. Instead of external resources and handcrafted features, only raw sentence pairs and pre-trained word embeddings are required as input. An attention mechanism is utilized in the LSTM network to capture high-level semantic information. We demonstrate the effectiveness of our model by applying the architecture to different tasks spanning three corpora and three languages. Experimental results on all tasks and languages show that our method with the attention mechanism outperforms the baseline model, achieving a higher correlation with human annotations.
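A minimal sketch of an attentive Siamese LSTM follows: a shared LSTM encodes both sentences, an attention layer pools the hidden states, and a similarity score is computed between the two pooled vectors. The hidden sizes and the cosine scoring choice are illustrative assumptions, not the paper's exact design.

```python
# Sketch of an attentive Siamese LSTM for STS: one shared encoder, attention
# pooling, cosine similarity. Sizes and the scorer are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveSiameseLSTM(nn.Module):
    def __init__(self, emb_dim: int = 300, hidden_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)  # shared weights
        self.att = nn.Linear(hidden_dim, 1, bias=False)             # attention scorer

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(x)                    # (batch, seq, hidden)
        w = torch.softmax(self.att(h), dim=1)  # attention weights over tokens
        return (w * h).sum(dim=1)              # attention-weighted pooling

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        return F.cosine_similarity(self.encode(x1), self.encode(x2), dim=-1)

# Usage with pre-trained word embeddings already looked up for each token:
model = AttentiveSiameseLSTM()
sim = model(torch.randn(4, 15, 300), torch.randn(4, 12, 300))  # (4,) scores
```

Because the encoder is shared and pooling is attention-weighted, the two sentences may have different lengths, and no handcrafted features are needed beyond the embeddings.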