The ability of superhydrophobic surfaces to stay dry, self-clean and avoid biofouling is attractive for applications in biotechnology, medicine and heat transfer 1-10 . It requires that water droplets placed on superhydrophobic surfaces have large apparent contact angles (θ* > 150°) and low roll-off angles (θ_roll-off < 10°), realized with surfaces having low-surface-energy chemistry as well as micro- or nanoscale surface roughness that minimizes liquid-solid contact 11-17 . But rough surfaces where liquid contacts only a small
The extensive computational burden limits the use of CNNs in mobile devices for dense estimation tasks. In this paper, we present a lightweight network to address this problem, namely LEDNet, which employs an asymmetric encoder-decoder architecture for the task of real-time semantic segmentation. More specifically, the encoder adopts a ResNet as the backbone network, where two new operations, channel split and shuffle, are utilized in each residual block to greatly reduce computation cost while maintaining high segmentation accuracy. On the other hand, an attention pyramid network (APN) is employed in the decoder to further reduce the complexity of the entire network. Our model has less than 1M parameters and is able to run at over 71 FPS on a single GTX 1080Ti GPU. Comprehensive experiments demonstrate that our approach achieves state-of-the-art results in terms of the speed and accuracy trade-off on the Cityscapes dataset.
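The channel split and shuffle operations mentioned above can be sketched in NumPy. This is an illustrative sketch only: the channel count, split ratio and group count are assumptions, and the real residual blocks apply these operations to learned feature maps inside the network.

```python
import numpy as np

def channel_split(x, ratio=0.5):
    """Split a (C, H, W) feature map into two channel groups."""
    c = int(x.shape[0] * ratio)
    return x[:c], x[c:]

def channel_shuffle(x, groups):
    """Interleave channels across groups via reshape -> transpose -> reshape."""
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by group count"
    x = x.reshape(groups, c // groups, h, w)
    x = x.transpose(1, 0, 2, 3)  # swap group and per-group axes
    return x.reshape(c, h, w)

# Toy 8-channel feature map, 2x2 spatial resolution.
x = np.arange(8 * 2 * 2, dtype=np.float32).reshape(8, 2, 2)
a, b = channel_split(x)            # two 4-channel halves
y = channel_shuffle(x, groups=2)   # channel order becomes 0,4,1,5,2,6,3,7
```

The shuffle lets information flow between the two split branches at negligible cost, since reshape and transpose involve no arithmetic on the feature values.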
Figure 1: Reintroducing folds into captured garments: (left) input video frames, (center) typical capture result [BPS * 08], and (right) t-shirt wrinkled by our data-driven method.

The presence of characteristic fine folds is important for modeling realistic-looking virtual garments. While recent garment capture techniques are quite successful at capturing the low-frequency garment shape and motion over time, they often fail to capture the numerous high-frequency folds, reducing the realism of the reconstructed space-time models. In our work we propose a method for reintroducing fine folds into the captured models using data-driven dynamic wrinkling. We first estimate the shape and position of folds based on the original video footage used for capture and then wrinkle the surface based on those estimates using space-time deformation. Both steps utilize the unique geometric characteristics of garments in general, and garment folds specifically, to facilitate the modeling of believable folds. We demonstrate the effectiveness of our wrinkling method on a variety of garments that have been captured using several recent techniques.
Recent years have witnessed great advances in semantic segmentation using deep convolutional neural networks (DCNNs). However, the large number of convolutional layers and feature channels makes semantic segmentation a computationally heavy task, which is disadvantageous in scenarios with limited resources. In this paper, we design an efficient symmetric network, called ESNet, to address this problem. The whole network has a nearly symmetric architecture, mainly composed of a series of factorized convolution units (FCUs) and their parallel counterparts. On one hand, the FCU adopts a widely used 1D factorized convolution in its residual layers. On the other hand, the parallel version employs a transform-split-transform-merge strategy in the design of the residual module, where the split branches adopt dilated convolutions with different rates to enlarge the receptive field. Our model has nearly 1.6M parameters and runs at over 62 FPS on a single GTX 1080Ti GPU. The experiments demonstrate that our approach achieves state-of-the-art results in terms of the speed and accuracy trade-off for real-time semantic segmentation on the Cityscapes dataset.
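The saving behind 1D factorized convolution can be illustrated with a small NumPy sketch: a rank-1 k×k kernel can be applied as a k×1 pass followed by a 1×k pass, using 2k weights instead of k². The kernel values and input here are arbitrary assumptions for demonstration; the naive convolution loop is for clarity, not speed.

```python
import numpy as np

def conv2d(x, k):
    """Naive 'valid' 2-D cross-correlation, for illustration only."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
v = rng.standard_normal((3, 1))  # 3x1 vertical kernel
h = rng.standard_normal((1, 3))  # 1x3 horizontal kernel

# Applying 3x1 then 1x3 matches one 3x3 convolution with the outer-product
# kernel v @ h, at 3 + 3 = 6 weights instead of 9.
y_factored = conv2d(conv2d(x, v), h)
y_full = conv2d(x, v @ h)
```

The saving grows with kernel size, and in a deep network it applies per channel pair, which is why factorized residual layers cut both parameters and FLOPs substantially.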
Using contextual information for scene labeling has gained substantial attention in the fields of image processing and computer vision. In this paper, a fusion model using a flexible segmentation graph (FSG) is presented to explore multiscale context for the scene labeling problem. Given a family of segmentations, the FSG representation is established based on the spatial relationships of these segmentations. Within the FSG, the labeling inference process is formulated as a contextual fusion model, trained with discriminative classifiers. Compared to previous approaches, which usually employ Conditional Random Fields (CRFs) or hierarchical models to explore contextual information, our FSG representation is flexible and efficient without hierarchical constraints, allowing us to capture a wide variety of visual context for the task of image labeling. Our approach yields state-of-the-art results on the MSRC dataset (21 classes) and the LHI dataset (15 classes), and near-record results on the SIFT Flow dataset (33 classes) and the PASCAL VOC segmentation dataset (20 classes), while producing a 320 × 240 scene labeling in less than a second. Remarkably, our approach also outperforms recent CNN-based methods.
Routability is one of the primary objectives in placement. There has been much research into forecasting routing problems and improving routability in placement, but no complete solution has been found. Most traditional routability-driven placers aim to improve global routing results, but true routability lies in detailed routing. Predicting detailed routing routability in placement is extremely difficult due to the complexity and uncertainty of routing. In this paper, we propose a new detailed routing routability prediction model based on supervised learning. After extracting key features from placement and detailed routing, multivariate adaptive regression is performed to learn the connection between these two stages. Using a well-trained model, most design rule violations after detailed routing can be foreseen at the placement stage. Experiments show that our average prediction accuracy is 79.8%, which is comparable with other state-of-the-art routability estimation techniques.
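The core building block of multivariate adaptive regression is the hinge basis function, which lets a linear fit bend at a knot. The sketch below fits mirrored hinges at a fixed knot on synthetic data; the feature name (pin density), the knot location, and the toy target are all assumptions for illustration, and full MARS would additionally search knot positions and prune terms adaptively.

```python
import numpy as np

def hinge(x, knot, sign=1):
    """MARS hinge basis: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

# Synthetic stand-in: DRV count rises only once pin density exceeds 0.6.
rng = np.random.default_rng(1)
pin_density = rng.uniform(0.0, 1.0, 200)
drv = 50.0 * np.maximum(0.0, pin_density - 0.6) + rng.normal(0.0, 0.5, 200)

# Least-squares fit over an intercept plus a mirrored hinge pair at the knot.
knot = 0.6
X = np.column_stack([
    np.ones_like(pin_density),
    hinge(pin_density, knot, +1),   # active above the knot
    hinge(pin_density, knot, -1),   # active below the knot
])
coef, *_ = np.linalg.lstsq(X, drv, rcond=None)
pred = X @ coef
```

The fitted slope on the upper hinge recovers the piecewise-linear trend, which is the kind of threshold behavior (congestion only hurting past some density) that makes hinge-based regression a natural fit for violation prediction.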