We present a real-time solution for generating detailed clothing deformations from pre-computed clothing shape examples. Given an input pose, it synthesizes a clothing deformation by blending the skinned clothing deformations of nearby examples, controlled by the body skeleton. Observing that cloth deformation can be well modeled with sensitivity analysis driven by the underlying skeleton, we introduce a sensitivity-based method to construct a pose-dependent rigging solution from sparse examples. We also develop a sensitivity-based blending scheme to find nearby examples for the input pose and evaluate their contributions to the result. Finally, we propose a greedy scheme based on stochastic optimization for sampling the pose space and generating example clothing shapes. Our solution is fast and compact, and it generates realistic clothing animation for various kinds of clothes in real time.
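The blending step can be sketched as follows. This is a minimal illustration, not the paper's sensitivity-based weighting: it assumes a hypothetical Gaussian radial-basis weight on pose distance, and assumes each example's clothing offsets have already been skinned to the query pose.

```python
import numpy as np

def blend_examples(query_pose, example_poses, example_offsets, sigma=0.5):
    """Blend per-example clothing offsets with weights based on pose distance.

    A hypothetical Gaussian radial-basis weighting stands in for the paper's
    sensitivity-based scheme; example_offsets[i] holds the vertex offsets of
    example i after skinning to the query pose.
    """
    d = np.linalg.norm(example_poses - query_pose, axis=1)  # pose-space distances
    w = np.exp(-(d / sigma) ** 2)
    w /= w.sum()  # normalize so contributions sum to one
    # Weighted sum of the example offsets -> blended clothing deformation.
    return np.tensordot(w, example_offsets, axes=1)
```

Two equidistant examples contribute equally, so their offsets are averaged; as the query pose approaches one example, that example's deformation dominates.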
Virtualized traffic, reconstructed from various simulation models and real-world traffic data, is a promising approach to producing detailed traffic flows. A variety of applications can benefit from virtual traffic, including, but not limited to, video games, virtual reality, traffic engineering and autonomous driving. In this survey, we provide a comprehensive review of state-of-the-art techniques for traffic simulation and animation. We start with a discussion of three classes of traffic simulation models applied at different levels of detail. Then, we introduce various data-driven animation techniques, including existing data collection methods, and the validation and evaluation of simulated traffic flows. Next, we discuss how traffic simulations can benefit the training and testing of autonomous vehicles. Finally, we discuss the current state of traffic simulation and animation and suggest future research directions.
Recent advances in smart materials and microfabrication techniques have led to the development of microrobots for on-demand and targeted therapy. Self-folded hydrogel tubes are particularly promising vehicles as they provide a relatively large surface-area-to-volume ratio and cargo space for therapeutic agents. In this paper, we decorate these microstructures with an artificial approximation of a bacterial flagellum to enable efficient swimming in fluidic environments. Flexibility enhances the overall motility of the soft microrobot through synergistic propulsion generated by the tubular body and the flagellum, a feature that has not been observed in conventional microrobots manufactured from rigid materials. While the flagellum applies forward thrust, wobbling of the tail induces a precession of the body that can provide extra speed depending on the tail design. A simple model based on resistive force theory explains the direction-dependent changes in swimming motility and the role of tail geometry.
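The core of resistive force theory is drag anisotropy: a slender filament experiences higher drag moving perpendicular to its axis than along it, which converts rotation of a helical tail into axial thrust. The sketch below uses standard slender-body drag coefficients; the specific parameter values and the thrust-per-length expression are illustrative, not the paper's fitted model.

```python
import math

def rft_coefficients(mu, L, r):
    """Slender-body resistive-force drag coefficients (per unit length)
    for a filament of length L and radius r in fluid of viscosity mu."""
    c_par = 2 * math.pi * mu / (math.log(2 * L / r) - 0.5)   # tangential drag
    c_perp = 4 * math.pi * mu / (math.log(2 * L / r) + 0.5)  # normal drag
    return c_par, c_perp

def helix_thrust_per_length(mu, L, r, pitch_angle, omega, R):
    """Axial propulsive force per unit length of a helical tail of radius R
    rotating at angular speed omega: the anisotropy (c_perp > c_par) is
    what turns rotation into forward thrust."""
    c_par, c_perp = rft_coefficients(mu, L, r)
    return (c_perp - c_par) * math.sin(pitch_angle) * math.cos(pitch_angle) * omega * R
```

For a straight filament (pitch angle 0 or 90 degrees) the thrust vanishes, consistent with the role the abstract assigns to tail geometry.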
We present a novel data-driven approach to populating virtual road networks with realistic traffic flows. Specifically, given a limited set of vehicle trajectories as input samples, our approach first synthesizes a large set of vehicle trajectories. By treating the spatio-temporal information of traffic flows as a 2D texture, the generation of new traffic flows can be formulated as a texture synthesis process, which is solved by minimizing a newly developed traffic texture energy. The synthesized output captures the spatio-temporal dynamics of the input traffic flows, and the vehicle interactions in it strictly follow traffic rules. After that, we map the synthesized vehicle trajectories onto virtual road networks using a cage-based registration scheme, in which a few traffic-specific constraints are enforced to maintain each vehicle's original spatial location and synchronize its motion with that of its neighboring vehicles. Our approach is intuitive to control and scales to the complexity of virtual road networks. We validated our approach through many experiments and paired-comparison user studies.
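The texture-synthesis formulation can be illustrated schematically: treat the spatio-temporal traffic field as a 2D image and score a synthesized output by how well every local patch matches some patch of the input exemplar. The energy below is a generic patch-based texture energy, not the paper's exact traffic texture energy, and the brute-force nearest-neighbor search is for clarity only.

```python
import numpy as np

def traffic_texture_energy(output, exemplar, patch=3):
    """Generic patch-based texture energy: for every patch in the synthesized
    spatio-temporal texture, the squared distance to its closest patch in the
    input exemplar, summed over all patches. Minimizing such an energy (e.g.
    by alternating nearest-neighbor search and averaging) drives synthesis."""
    def patches(img):
        H, W = img.shape
        return np.array([img[i:i + patch, j:j + patch].ravel()
                         for i in range(H - patch + 1)
                         for j in range(W - patch + 1)])
    P_out, P_in = patches(output), patches(exemplar)
    # Squared distance from every output patch to every exemplar patch.
    d = ((P_out[:, None, :] - P_in[None, :, :]) ** 2).sum(-1)
    # Each output patch is charged its best (minimum) match cost.
    return d.min(axis=1).sum()
```

An output identical to the exemplar has zero energy; the energy grows as the synthesized flow's local spatio-temporal patterns drift from those observed in the input.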
We present a video-based approach to learning the specific driving characteristics of the drivers in a video for advanced traffic control. Each vehicle's driving characteristics are calculated in an offline learning process. Given each vehicle's initial status and its personalized parameters as input, our approach can vividly reproduce the traffic flow in the sample video with high accuracy. The learned characteristics can also be applied to any agent-based traffic simulation system. We then introduce a new traffic animation method that animates each vehicle with its real driving habits and shows its adaptation to the surrounding traffic. Our results are compared with those of existing traffic animation methods to demonstrate the effectiveness of the presented approach.
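In an agent-based traffic simulator, "personalized parameters" are typically the per-driver constants of a car-following model. As a stand-in illustration (the abstract does not name the model used), the widely known Intelligent Driver Model shows how such parameters shape behavior:

```python
import math

def idm_accel(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration -- a standard agent-based
    car-following model used here for illustration; the per-driver
    parameters (desired speed v0, headway T, max acceleration a,
    comfortable braking b, jam distance s0) play the role of the
    learned 'personalized' driving characteristics.

    v: own speed, gap: bumper-to-bumper distance to the leader,
    dv: approach rate (own speed minus leader speed)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)
```

Fitting v0, T, a, b, s0 to each vehicle's observed trajectory is one common way to reproduce individual driving habits in simulation.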
Fuzzy logic has been shown to be an important tool for effective traffic control systems. Research on intelligent traffic systems has revealed that the use of linguistic variables in fuzzy systems helps accommodate the diverse decisions that humans make in traffic control. However, the majority of these works focus on vehicular traffic without adequate consideration of pedestrian crossings. This research therefore focuses on incorporating pedestrian-crossing variables into vehicular traffic control using fuzzy logic. The fuzzy inference system was implemented in MATLAB 2014a. Pedestrian delay and total pedestrian count were among the variables considered for signal-time allocation. The results show that pedestrian delay contributes significantly to traffic control systems that aim to enhance pedestrian safety.
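The inference idea can be sketched with a toy two-rule controller. This is an illustrative Mamdani-style sketch only: the rule names, membership breakpoints and output values are hypothetical, and the paper's actual rule base (built in MATLAB 2014a) is not reproduced here.

```python
def ramp(x, a, b):
    """Piecewise-linear membership rising from 0 at a to 1 at b."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def green_extension(ped_delay_s, ped_count):
    """Hypothetical two-rule fuzzy controller for the pedestrian phase:
      R1: delay HIGH AND count HIGH -> extension LONG  (20 s)
      R2: otherwise                 -> extension SHORT (5 s)
    AND is min; the crisp output is a weighted average of the rule
    consequents (a simple defuzzification)."""
    delay_high = ramp(ped_delay_s, 20, 60)   # membership of 'delay is HIGH'
    count_high = ramp(ped_count, 5, 20)      # membership of 'count is HIGH'
    r1 = min(delay_high, count_high)         # firing strength of R1
    r2 = 1.0 - r1                            # complementary rule strength
    return (r1 * 20.0 + r2 * 5.0) / (r1 + r2)
```

As pedestrian delay and count grow, the controller smoothly interpolates between the short and long green extensions rather than switching abruptly, which is the usual argument for fuzzy control at crossings.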