Social networks have become a very popular way for internet users to communicate and interact online. Users spend plenty of time on popular social networks (e.g., Facebook, Twitter, and Sina Weibo) reading news, discussing events, and posting messages. Unfortunately, this popularity also attracts a significant number of spammers who continuously exhibit malicious behavior (e.g., posting messages containing commercial URLs or following a large number of users), causing misunderstanding and inconvenience in users' social activities. In this paper, a supervised machine learning based solution is proposed for effective spammer detection. The main procedure of the work is as follows: first, collect a dataset from Sina Weibo comprising 30,116 users and more than 16 million messages. Then, construct a labeled dataset by manually classifying users into spammers and non-spammers. Afterwards, extract a set of features from message content and users' social behavior, and feed them into an SVM (Support Vector Machine) based spammer detection algorithm. The experiments show that the proposed solution provides excellent performance, with true positive rates for spammers and non-spammers reaching 99.1% and 99.9%, respectively.
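The pipeline above (behavioral features fed to a binary SVM) can be sketched as follows. This is a minimal illustration only: the feature names and toy data are hypothetical stand-ins, not the paper's actual Sina Weibo features or dataset.

```python
# Illustrative sketch, not the authors' pipeline: hypothetical behavioral
# features for spammer (1) vs. non-spammer (0) classification with an SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [url_ratio, avg_msgs_per_day, following_to_follower_ratio]
X = np.array([
    [0.9, 40.0, 25.0],   # spammer-like: many URLs, mass following
    [0.8, 55.0, 30.0],
    [0.7, 35.0, 18.0],
    [0.1,  3.0,  1.2],   # legitimate-user-like behavior
    [0.0,  1.5,  0.8],
    [0.2,  4.0,  1.0],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = spammer, 0 = non-spammer

# Standardize features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Classify a new user with heavy URL posting and aggressive following
print(clf.predict([[0.85, 50.0, 20.0]])[0])
```

In practice the features would be computed from each user's message content and social graph, and the classifier evaluated on a held-out labeled set rather than the training data.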
The camera provides rich visual information about objects and has become one of the most mainstream sensors in the automation community. However, it is often of limited use when objects are not visually distinguishable. On the other hand, tactile sensors can capture multiple object properties, such as texture, roughness, spatial features, compliance, and friction, and therefore provide another important modality for perception. Nevertheless, effectively combining the visual and tactile modalities remains a challenging problem. In this paper, we develop a visual-tactile fusion framework for object recognition tasks. This paper uses a multivariate-time-series model to represent the tactile sequence and a covariance descriptor to characterize the image. Further, we design a joint group kernel sparse coding (JGKSC) method to tackle the intrinsically weak pairing problem in visual-tactile data samples. Finally, we develop a visual-tactile dataset composed of 18 household objects for validation. The experimental results show that considering both visual and tactile inputs is beneficial and that the proposed method indeed provides an effective fusion strategy. Note to Practitioners: Visual and tactile measurements offer complementary properties that make them particularly suitable for fusion in order to achieve the robust and accurate recognition of objects, a necessity for many automation systems. In this paper, we investigate a widely applicable scenario in grasp manipulation. When identifying an object, the manipulator may see it using the camera and touch it using its hand. Thus, we obtain a pair of test samples: one image sample and one tactile sample. The manipulator then uses this sample pair to identify the object with a classifier constructed from previously collected training samples.
However, when collecting training samples, we may collect the image samples and the tactile samples separately. In other words, the training samples may not be paired, while the test samples are. This paper addresses this practical problem by developing the JGKSC method, which encourages contributions from atoms belonging to the same group, even when the atoms themselves differ. Although our focus is on combining visual and tactile information, the described problem setting is common in the automation community, so the algorithm described in this paper can also handle weak pairings between a variety of other sensors.
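The group-structured sparsity idea underlying such methods can be illustrated with a much simpler model than the paper's JGKSC: plain group sparse coding, where a signal is coded over a dictionary whose atoms are partitioned into groups and whole groups switch on or off together. The dictionary, group sizes, and solver below are all illustrative assumptions, not the authors' formulation.

```python
# Minimal group sparse coding sketch (NOT the paper's JGKSC): solve
#   min_a 0.5*||x - D a||^2 + lam * sum_g ||a_g||_2
# by proximal gradient descent, so atoms in the same group are
# selected jointly while other groups are driven to zero.
import numpy as np

rng = np.random.default_rng(0)
n_groups, atoms_per_group, dim = 4, 3, 8
D = rng.standard_normal((dim, n_groups * atoms_per_group))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

# Generate a signal from group 1 only; its code should land in group 1
true_a = np.zeros(n_groups * atoms_per_group)
true_a[3:6] = [1.0, -0.5, 0.8]
x = D @ true_a

a = np.zeros_like(true_a)
lam = 0.1
step = 1.0 / np.linalg.norm(D, 2) ** 2         # safe step size
for _ in range(500):
    z = a - step * (D.T @ (D @ a - x))         # gradient step on data term
    for g in range(n_groups):                  # group soft-threshold (prox)
        s = slice(g * atoms_per_group, (g + 1) * atoms_per_group)
        norm = np.linalg.norm(z[s])
        z[s] = 0.0 if norm == 0 else max(0.0, 1 - step * lam / norm) * z[s]
    a = z

group_norms = [np.linalg.norm(a[g * 3:(g + 1) * 3]) for g in range(n_groups)]
print(np.argmax(group_norms))                  # index of the active group
```

The group penalty is what lets unpaired samples from different modalities cooperate: samples that share a class share a group, so their codes are encouraged to draw on the same group of atoms even when the individual atoms differ.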
This paper proposes a computationally efficient method for traffic sign recognition (TSR). The proposed method consists of two modules: 1) extraction of a histogram of oriented gradient variant (HOGv) feature and 2) a single classifier trained by the extreme learning machine (ELM) algorithm. The presented HOGv feature keeps a good balance between redundancy and local detail, so it can better represent distinctive shapes. The classifier is a single-hidden-layer feedforward network. Under the ELM algorithm, the connection between the input and hidden layers realizes a random feature mapping, and only the weights between the hidden and output layers are trained; as a result, layer-by-layer tuning is not required. Meanwhile, the norm of the output weights is included in the cost function, so the ELM-based classifier can achieve an optimal and well-generalized solution for multiclass TSR while balancing recognition accuracy and computational cost. Three datasets are used to evaluate the proposed method: the German TSR benchmark dataset, the Belgium traffic sign classification dataset, and the revised mapping and assessing the state of traffic infrastructure (revised MASTIF) dataset. Experimental results show that the proposed method achieves not only high recognition accuracy but also extremely high computational efficiency in both training and recognition on all three datasets.
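The ELM training scheme described above (random input-to-hidden weights, closed-form solve for the output weights with a norm penalty) can be sketched as follows. HOGv feature extraction and the traffic sign datasets are omitted; the toy data, hidden-layer size, and regularization constant are illustrative assumptions.

```python
# Minimal ELM classifier sketch: the input-to-hidden mapping is random and
# never tuned; only the hidden-to-output weights are solved, in closed form,
# with a regularizer on their norm (no layer-by-layer training).
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, y, n_hidden=64, C=1.0):
    n_classes = y.max() + 1
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, fixed
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # random feature map
    T = np.eye(n_classes)[y]                          # one-hot targets
    # Ridge-regularized least squares: the output-weight norm enters the cost
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy 2-class problem standing in for traffic-sign feature vectors
X = rng.standard_normal((200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = elm_train(X, y)
acc = (elm_predict(model, X) == y).mean()
print(round(acc, 3))
```

Because training reduces to one linear solve, both training and recognition are fast, which is the source of the computational efficiency claimed for the ELM-based classifier.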
In this paper, we propose a novel iterative multi-task framework to complete the segmentation mask of an occluded vehicle and recover the appearance of its invisible parts. In particular, to improve the quality of the segmentation completion, we present two coupled discriminators and introduce an auxiliary 3D model pool for sampling authentic silhouettes as adversarial samples. In addition, we propose a two-path structure with a shared network to enhance the appearance recovery capability. By iteratively performing the segmentation completion and the appearance recovery, the results are progressively refined. To evaluate our method, we present a dataset, the Occluded Vehicle dataset, containing synthetic and real-world occluded vehicle images. We conduct comparison experiments on this dataset and demonstrate that our model outperforms the state-of-the-art in the tasks of recovering the segmentation mask and appearance of occluded vehicles. Moreover, we also demonstrate that our appearance recovery approach can benefit occluded vehicle tracking in real-world videos.
In a prospective study, 42 048 adults residing in Zhongshan City, Guangdong, China, were followed for 16 years, and 171 of them developed nasopharyngeal carcinoma (NPC). Although the Epstein-Barr virus (EBV) antibody levels of the cohort fluctuated, the antibody levels of 93% of the patients with NPC were raised and maintained at high levels for up to 10 years prior to diagnosis. This suggests that the serologic window affords an opportunity to monitor tumor progression during the preclinical stage of NPC development, facilitating early NPC detection. We reviewed the clinical records of the 171 patients with NPC in the prospective study to assess the efficacy of early NPC detection by serologic screening and clinical examination. Of the 171 patients, 51 had Stage I tumors (44 were among the 73 patients detected by clinical examination, and 7 were among the 98 patients who presented to the outpatient department). Initial serologic screening predicted 58 (95.1%) of the 61 patients detected within 2 years. During this period, the risk in the screened population (58/3093) was 13 times that of the cohort (61/42 048). Clinical examination detected all 58 predicted cases, 35 (60.3%) of whom were diagnosed with Stage I tumors. The serologic prediction rate fell to 33.6% (37/110) 2 to 16 years after screening, and the proportion of cases detected by clinical examination fell to 40.5% (15/37). The proportion of Stage I tumors among the cases detected by clinical examination during both periods remained at about 60%. We conclude that early detection of NPC can be accomplished by repeated serologic screening to maintain high prediction rates and by promptly examining screened subjects to detect tumors before symptoms develop.