Molded silicone rubbers are commonly used to manufacture soft robotic parts, but they are prone to tears, punctures, and tensile failures when strained. In this article, we present a fabric compositing method for improving the mechanical properties of soft robotic parts by creating a fabric/rubber composite that increases the strength and durability of the molded rubber. Comprehensive ASTM material tests evaluating strength, tear resistance, and puncture resistance are conducted on multiple composites embedded with different fabrics, including polyester, nylon, silk, cotton, rayon, and several blended fabrics. Results show that strong fabrics increase the strength and durability of the composite, valuable in pneumatic soft robotic applications, while elastic fabrics maintain elasticity and enhance tear strength, making them suitable for robotic skins or soft strain sensors. Two case studies then validate the proposed benefits of fabric compositing for soft robotic pressure vessel applications and soft strain sensor applications. Evaluations of the fabric/rubber composite samples and devices indicate that such methods are effective for improving the mechanical properties of soft robotic parts, resulting in parts with customizable stiffness, strength, and vastly improved durability.
We have prepared Fe3O4 nanocrystal-embedded polyaniline hybrids with well-defined cluster-like morphology through macromolecule-induced self-assembly. These magnetic and electrically conductive composite nanoclusters show flowability at room temperature in the absence of any solvent, which offers great potential in applications such as microwave absorbents and electromagnetic shielding coatings. This macromolecule-induced self-assembly strategy can be readily applied to the fabrication of other iron oxide/conjugated polymer composites to achieve robust multifunctional materials.
Video anomaly detection is challenging because abnormal events are unbounded, rare, equivocal, and irregular in real scenes. In recent years, transformers have demonstrated powerful modelling abilities for sequence data. Thus, we attempt to apply transformers to video anomaly detection. In this paper, we propose a prediction-based video anomaly detection approach named TransAnomaly. Our model combines the U-Net and the Video Vision Transformer (ViViT) to capture richer temporal information and more global context. To make full use of the ViViT for prediction, we modify the ViViT so that it is capable of video prediction. Experiments on benchmark datasets show that the addition of the transformer module improves anomaly detection performance. In addition, we calculate regularity scores with sliding windows and evaluate the impact of different window sizes and strides. With proper settings, our model outperforms other state-of-the-art prediction-based video anomaly detection approaches. Furthermore, our model can perform anomaly localization by tracking the location of patches with lower regularity scores.
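The sliding-window regularity scoring mentioned above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: it assumes per-frame prediction errors are already available and simply normalizes them into regularity scores, then averages over windows of a given size and stride.

```python
import numpy as np

def regularity_scores(frame_errors, window_size=8, stride=4):
    """Turn per-frame prediction errors into windowed regularity scores.

    frame_errors: sequence of prediction errors, one per frame
    Returns one averaged regularity score per sliding window.
    (Hypothetical sketch; the paper's exact scoring may differ.)
    """
    errors = np.asarray(frame_errors, dtype=float)
    # Min-max normalize errors to [0, 1]; regularity = 1 - normalized error,
    # so well-predicted (low-error) frames score close to 1.
    span = errors.max() - errors.min()
    norm = (errors - errors.min()) / span if span > 0 else np.zeros_like(errors)
    regularity = 1.0 - norm
    # Average regularity within each sliding window.
    scores = [
        regularity[start:start + window_size].mean()
        for start in range(0, len(regularity) - window_size + 1, stride)
    ]
    return np.array(scores)
```

Smaller windows and strides localize anomalies more finely in time but produce noisier scores, which is the trade-off the abstract's window-size/stride evaluation explores.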
We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks. By probing and analyzing what BERT already knows when solving this task, we obtain a better understanding of what task-specific knowledge BERT needs the most and where it is needed most. The analysis further motivates us to take a different approach than most existing works. Instead of using prior knowledge to create a new training task for fine-tuning BERT, we directly inject knowledge into BERT's multi-head attention mechanism. This leads us to a simple yet effective approach that enjoys a fast training stage, as it saves the model from training on additional data or tasks beyond the main task. Extensive experiments demonstrate that the proposed knowledge-enhanced BERT consistently improves semantic textual matching performance over the original BERT model, and the performance benefit is most salient when training data is scarce.
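One common way to inject prior knowledge directly into multi-head attention is to add a knowledge-derived bias to the attention logits before the softmax. The sketch below illustrates that general idea in NumPy; the function name, the additive-bias mechanism, and the `alpha` weighting are illustrative assumptions, not the paper's specific method.

```python
import numpy as np

def knowledge_biased_attention(q, k, v, prior_bias, alpha=1.0):
    """Scaled dot-product attention with an additive prior-knowledge bias.

    q, k, v: arrays of shape (..., seq_len, dim)
    prior_bias: (seq_len, seq_len) array, e.g. token-pair similarities
        from a lexicon or knowledge base (a hypothetical knowledge source)
    alpha: how strongly the prior influences attention
    """
    d = q.shape[-1]
    logits = q @ k.swapaxes(-2, -1) / np.sqrt(d)
    logits = logits + alpha * prior_bias  # inject the prior here
    # Numerically stable softmax over the last axis.
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn = w / w.sum(axis=-1, keepdims=True)
    return attn @ v
```

Because the bias enters only the forward pass, no auxiliary training task or extra data is required, which matches the fast-training property the abstract highlights.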