The development of paper-based sensors, antennas, and energy-harvesting devices can transform the way electronic devices are manufactured and used. Herein we describe an approach to fabricate paper thermoelectric generators for the first time by directly impregnating naturally abundant cellulose materials with p- or n-type colloidal semiconductor quantum dots. We investigate Seebeck coefficients and electrical conductivities as a function of temperature between 300 and 400 K, as well as in-plane thermal conductivities using the Ångström method. We further demonstrate equipment-free fabrication of flexible thermoelectric modules using p- and n-type paper strips. Leveraging paper's inherently low thermal conductivity and high flexibility, these paper modules have the potential to efficiently utilize heat available in natural and man-made environments by maximizing the thermal contact to heat sources of arbitrary geometry.
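The three measured transport quantities combine into the dimensionless thermoelectric figure of merit ZT = S²σT/κ, the standard way such measurements are compared. A minimal sketch of this relation follows; the numeric values are illustrative placeholders, not measurements from the paper:

```python
# Thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa.
# The example values below are illustrative placeholders only.

def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m,
                    thermal_conductivity_W_per_mK, temperature_K):
    """Dimensionless ZT from Seebeck coefficient S, electrical
    conductivity sigma, thermal conductivity kappa, and temperature T."""
    return (seebeck_V_per_K ** 2) * conductivity_S_per_m * temperature_K \
        / thermal_conductivity_W_per_mK

# Example: S = 200 uV/K, sigma = 10 S/m, kappa = 0.2 W/(m*K), T = 300 K
zt = figure_of_merit(200e-6, 10.0, 0.2, 300.0)  # = 6.0e-4
```

A low in-plane thermal conductivity κ, as the abstract notes for paper, raises ZT for the same electronic properties.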
Whilst computer vision models built using self-supervised approaches are now commonplace, some important questions remain. Do self-supervised models learn highly redundant channel features? What if a self-supervised network could dynamically select the important channels and discard the unnecessary ones? Convnets pre-trained with self-supervision currently obtain performance on downstream tasks comparable to their supervised counterparts in computer vision. However, self-supervised models have drawbacks, including large numbers of parameters, computationally expensive training strategies, and a clear need for faster inference on downstream tasks. In this work, we address the last of these by studying how a standard channel selection method developed for supervised learning can be applied to networks trained with self-supervision. We validate our findings over a range of target budgets td for channel computation on the image classification task across different datasets, specifically CIFAR-10, CIFAR-100, and ImageNet-100, obtaining performance comparable to that of the original network with all channels selected, at a significant reduction in computation reported in terms of FLOPs.
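The core idea of budgeted channel selection can be sketched as a greedy procedure: rank channels by an importance score and keep the most important ones until a fraction td of the total channel FLOPs is reached. This is a hedged illustration only; the scores, costs, and function name below are hypothetical, and real methods derive importance from learned gates or pruning criteria rather than fixed numbers:

```python
# Sketch: keep the most important channels subject to a FLOPs budget.
# Importance scores and per-channel costs here are synthetic.

def select_channels(importance, flops_per_channel, td):
    """Greedily keep channels in decreasing importance order while the
    cumulative cost stays within a fraction td of the total FLOPs."""
    budget = td * sum(flops_per_channel)
    order = sorted(range(len(importance)),
                   key=lambda i: importance[i], reverse=True)
    kept, used = [], 0.0
    for i in order:
        if used + flops_per_channel[i] <= budget:
            kept.append(i)
            used += flops_per_channel[i]
    return sorted(kept)

# With td = 0.5 and equal per-channel cost, half the channels survive.
kept = select_channels([0.9, 0.1, 0.5, 0.7], [1.0, 1.0, 1.0, 1.0], 0.5)
# kept == [0, 3]
```

Sweeping td then trades accuracy against computation, which is the evaluation axis the abstract describes.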
Human-Computer Interaction is an emerging scientific field concerned with communication between humans and computers. A major element of this field is human emotion recognition. The most expressive way humans display emotions is through facial expressions. Traditionally, emotion recognition has been performed on laboratory-controlled data. While undoubtedly worthwhile at the time, such lab-controlled data poorly represents the environments and conditions faced in real-world situations. With the growing number of video clips online, it is worthwhile to explore the performance of emotion recognition methods that work 'in the wild'. This work focuses on automatic emotion recognition in wild video samples. We address the problem of human emotion recognition using a combination of video features and audio features. Our technique for emotion detection blends optical flow, Gabor filtering, a few other facial features, and audio features. Training and classification are performed using a Support Vector Machine (SVM) and a Hidden Markov Model (HMM). Our methodology produces better results than the baseline for some particular classes of emotions on the wild emotion dataset, with an overall accuracy of 20.51% on the test set.
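The fusion step described above can be sketched by concatenating per-clip video descriptors (e.g. optical flow and Gabor responses) with audio descriptors into a single feature vector before classification. In this hedged sketch a simple nearest-centroid classifier stands in for the SVM/HMM stage of the abstract, and all feature values and class labels are synthetic:

```python
import math

# Audio-visual feature fusion: concatenate video and audio features
# per clip, then classify. A nearest-centroid classifier stands in
# for the SVM/HMM stage here; all values below are synthetic.

def fuse(video_feats, audio_feats):
    """Concatenate video and audio descriptors into one vector."""
    return list(video_feats) + list(audio_feats)

def nearest_centroid(x, centroids):
    """Return the label of the closest class centroid in fused space."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Synthetic centroids for two emotion classes in the fused space
centroids = {
    "happy": fuse([0.8, 0.2], [0.9]),
    "angry": fuse([0.1, 0.9], [0.2]),
}
clip = fuse([0.7, 0.3], [0.8])
label = nearest_centroid(clip, centroids)  # "happy"
```

The design point is that fusion happens at the feature level, so one classifier sees both modalities jointly rather than voting over separate audio and video predictions.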
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.