Sentiment analysis usually refers to the analysis of human-generated content through a polarity filter, whereas affective computing deals with the specific emotions conveyed by that content. Emotional information can rarely be described accurately by a single emotion class; multilabel classifiers can assign human-generated content to multiple emotional classes. Ensemble learning can improve the statistical, computational, and representational aspects of such classifiers. We present a baseline stacked ensemble and propose a weighted ensemble. Our proposed weighted ensemble combines multiple classifiers to improve classification results without hyperparameter tuning or overfitting. We evaluate our ensemble models on two datasets. The first, from SemEval-2018 Task 1, contains almost 7,000 tweets labeled with 11 emotion classes. The second, the Toxic Comment dataset, contains more than 150,000 comments labeled with six types of abuse or harassment. Our results suggest that ensemble learning improves classification results by 1.5% to 5.4%.
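To illustrate the weighted-ensemble idea in the abstract above, a minimal sketch (not the authors' exact method): the per-class probabilities of several multilabel classifiers are averaged with fixed weights, then thresholded into label decisions. The classifier outputs, weights, and threshold below are hypothetical.

```python
import numpy as np

def weighted_ensemble(prob_list, weights, threshold=0.5):
    """Combine per-class probabilities from several multilabel
    classifiers using a fixed weighted average (hypothetical sketch)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                # normalize weights
    probs = np.asarray(prob_list)                    # (n_models, n_samples, n_classes)
    combined = np.tensordot(weights, probs, axes=1)  # weighted average per class
    return (combined >= threshold).astype(int)       # multilabel decision

# Two hypothetical classifiers scoring 2 samples over 3 emotion classes
p1 = [[0.9, 0.2, 0.6], [0.1, 0.8, 0.4]]
p2 = [[0.7, 0.4, 0.2], [0.3, 0.6, 0.6]]
labels = weighted_ensemble([p1, p2], weights=[2, 1])
# labels -> [[1, 0, 0], [0, 1, 0]]
```

Because the weights are fixed rather than learned, this combination step itself introduces no extra hyperparameter search and cannot overfit the training data.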
In this article, we present a force-measuring method for assessing participant responses in studies of visual perception. We present a device, disguised as a mouse pad, designed to measure mouse-click pressure and click press-to-release time in participants who are unaware of the physiological assessment. The aim of the technology, in the current studies, was to provide a physiological assessment of confidence and task difficulty. We tested the device in three experiments: a gender-recognition study using morphed male and female faces, a visual-suppression study using backward masking, and a target-search study in which participants decided whether a letter was repeated in a subsequently presented letter string. Across all studies, higher task difficulty was associated with longer click-release times. Intriguingly, higher task difficulty was also associated with lower click pressure. Higher confidence ratings were consistently associated with higher click pressure and shorter click-release times across all experiments. These findings suggest that the technology can be used to assess responses relating to task difficulty and participant confidence in studies of visual perception. We suggest that the assessment of release times can also be implemented with standard equipment, and we provide a manual and easy-to-use code for the implementation.
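As the abstract notes, press-to-release time (unlike pressure) needs no special hardware: it can be derived from standard mouse events by timestamping button-down and button-up. A minimal, library-agnostic sketch (the class and the simulated timestamps are hypothetical, not the authors' published code):

```python
import time

class ClickTimer:
    """Record press-to-release times from mouse button-down/up events."""

    def __init__(self):
        self._down = None          # timestamp of the last button-down
        self.release_times = []    # all recorded press-to-release intervals

    def on_press(self, t=None):
        # In a real experiment, call this from the toolkit's mouse-down handler.
        self._down = time.monotonic() if t is None else t

    def on_release(self, t=None):
        if self._down is None:     # release without a matching press
            return None
        t = time.monotonic() if t is None else t
        dt = t - self._down
        self.release_times.append(dt)
        self._down = None
        return dt

# Simulated events: press at t = 1.00 s, release at t = 1.18 s
timer = ClickTimer()
timer.on_press(t=1.00)
dt = timer.on_release(t=1.18)      # press-to-release interval of ~0.18 s
```

In practice `on_press`/`on_release` would be wired to the mouse callbacks of whatever stimulus-presentation toolkit the experiment already uses.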
Scientific projects that require human computation often resort to crowdsourcing. Interested individuals can contribute to a crowdsourcing task, thereby contributing toward the project's goals. To motivate participation and engagement, scientists use a variety of reward mechanisms. The most common motivator, and the one that yields the fastest results, is monetary reward. By offering monetary rewards, scientists can reach a wider audience for the task. Because the payment is typically below the minimum wage of developed economies, users from developing countries are more eager to participate. In subjective tasks, or tasks whose answers cannot be validated as simply right or wrong, monetary incentives may conflict with the much-needed quality of submissions. We perform a subjective crowdsourcing task, emotion annotation, and compare the quality of answers from contributors of varying income levels, grouped by Gross Domestic Product (GDP). The results indicate different contribution processes between contributors from different GDP regions: low-income contributors, possibly driven by the monetary incentive, submit low-quality answers at a higher pace, while high-income contributors provide diverse answers at a slower pace.
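The comparison described above can be sketched as a simple grouping step: contributors are split into income tiers by their country's GDP per capita, and per-group averages of annotation pace and answer diversity are compared. All figures and the cutoff below are hypothetical illustrations, not the study's data.

```python
from statistics import mean

# Hypothetical contributor records:
# (GDP per capita in USD, seconds per annotation, distinct emotion labels used)
records = [
    (2000, 4.0, 2),    # low-income contributors
    (1500, 3.5, 1),
    (45000, 9.0, 5),   # high-income contributors
    (52000, 8.0, 4),
]

GDP_CUTOFF = 12000  # hypothetical low/high income threshold

def summarize(group):
    """Average pace and label diversity for one income tier."""
    return {
        "sec_per_answer": mean(r[1] for r in group),
        "distinct_labels": mean(r[2] for r in group),
    }

low = [r for r in records if r[0] < GDP_CUTOFF]
high = [r for r in records if r[0] >= GDP_CUTOFF]
low_stats, high_stats = summarize(low), summarize(high)
```

With these toy numbers, the low-income group answers faster but uses fewer distinct labels, mirroring the pattern the abstract reports.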