We study the reinforcement learning problem of complex action control in Multi-player Online Battle Arena (MOBA) 1v1 games. This problem involves far more complicated state and action spaces than those of traditional 1v1 games, such as Go and Atari, which makes it very difficult to find any policy with human-level performance. In this paper, we present a deep reinforcement learning framework to tackle this problem from the perspectives of both system and algorithm. Our system features low coupling and high scalability, which enables efficient exploration at large scale. Our algorithm includes several novel strategies, including control dependency decoupling, action mask, target attention, and dual-clip PPO, with which our proposed actor-critic network can be trained effectively in our system. Tested on the MOBA game Honor of Kings, the trained AI agents can defeat top professional human players in full 1v1 games.
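Of the strategies listed above, dual-clip PPO has a compact form: when the advantage is negative, the standard PPO clipped surrogate is additionally bounded from below by c times the advantage, which prevents a very large policy ratio from producing an unbounded negative objective. A minimal sketch of this objective (function name and the vectorized numpy form are illustrative, not the authors' code):

```python
import numpy as np

def dual_clip_ppo_objective(ratio, advantage, eps=0.2, c=3.0):
    """Dual-clip PPO surrogate objective, evaluated elementwise.

    ratio:     pi_new(a|s) / pi_old(a|s) for each sample
    advantage: estimated advantage for each sample
    eps:       standard PPO clip range
    c:         dual-clip constant (c > 1), active only for negative advantages
    """
    # Standard PPO clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A)
    clipped_ratio = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    surrogate = np.minimum(ratio * advantage, clipped_ratio * advantage)
    # Dual clip: for A < 0, bound the objective from below by c * A
    return np.where(advantage < 0.0, np.maximum(surrogate, c * advantage), surrogate)
```

For example, with ratio 10 and advantage -1, the standard surrogate would be -10, but the dual clip bounds it at c * A = -3, limiting the gradient magnitude from off-policy samples.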
We present a system for adaptive synthesis of indoor scenes given an empty room and only a few object categories. Automatically suggesting indoor objects and proper layouts to convert an empty room into a 3D scene is challenging, since it requires interior design knowledge to balance factors such as space, path distance, illumination, and object relations in order to ensure the functional plausibility of the synthesized scenes. We exploit a database of 2D floor plans to extract object relations and provide layout examples for scene synthesis. Using the labeled human positions and directions in each plan, we detect activity relations and compute the coexistence frequency of object pairs to construct activity-associated object relation graphs. Given the input room and user-specified object categories, our system first leverages the object relation graphs and the database floor plans to suggest potential object categories beyond the specified ones, making the resulting scenes functionally complete, and then uses similar plan references to create the layout of the synthesized scenes. We show various synthesis results to demonstrate the practicability of our system, and validate its usability via a user study. We also compare our system with state-of-the-art furniture layout and activity-centric scene representation methods in terms of functional plausibility and user friendliness.
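The coexistence frequency mentioned above can be illustrated with a simple count over the floor-plan database: for each plan, record which object-category pairs appear together, then normalize by the number of plans. This is only a sketch of the idea under the assumption that each plan is given as a list of category labels; the function name and data layout are hypothetical:

```python
from collections import Counter
from itertools import combinations

def coexistence_frequency(floor_plans):
    """Fraction of floor plans in which each object-category pair co-occurs.

    floor_plans: list of plans, each a list of object category labels.
    Returns a dict mapping a sorted (cat_a, cat_b) pair to its frequency in [0, 1].
    """
    pair_counts = Counter()
    for objects in floor_plans:
        # Deduplicate within a plan so multiple sofas count once per plan
        for a, b in combinations(sorted(set(objects)), 2):
            pair_counts[(a, b)] += 1
    total = len(floor_plans)
    return {pair: count / total for pair, count in pair_counts.items()}
```

Pairs with high frequency (e.g. sofa and TV) would then contribute strong edges to the activity-associated object relation graphs.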
Inverse synthetic aperture radar (ISAR) images can be obtained using digital video broadcasting-terrestrial (DVB-T)-based passive radars. However, television broadcast signals offer poor range resolution for imaging purposes, because they have a narrower bandwidth than signals transmitted by a dedicated ISAR system. To reach finer range resolutions, signals composed of multiple DVB-T channels are required. Problems arise, however, because DVB-T channels are typically widely separated in the frequency domain. The gaps between channels produce high grating lobes in the image domain when Fourier-based algorithms are used to form the ISAR image. In this paper, compressive sensing theory is investigated to address this problem because of its ability to reconstruct sparse signals from incomplete measurements. By solving an optimization problem under the constraint of signal sparsity, passive ISAR images can be obtained with strongly reduced grating lobes. Both simulation and experimental results are shown to demonstrate the validity of the proposed approach.
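The sparsity-constrained reconstruction described above is commonly posed as an l1-regularized least-squares problem and solved with iterative shrinkage. The following is a generic sketch of one such solver (ISTA), not the specific algorithm or measurement model used in the paper; the measurement matrix A here stands in for the incomplete frequency-domain sampling:

```python
import numpy as np

def ista(A, y, lam=0.5, n_iter=3000):
    """Iterative shrinkage-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1.

    A: (m, n) measurement matrix with m < n (incomplete measurements)
    y: (m,) observed data
    Returns a sparse estimate x of length n.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)            # gradient of the least-squares term
        z = x - step * grad
        # Soft thresholding enforces sparsity at each iteration
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

With far fewer measurements than unknowns, the l1 penalty drives the estimate toward the few dominant scatterers, which is what suppresses the grating lobes a plain Fourier inversion would produce.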
This paper proposes a synthetic aperture radar (SAR) automatic target recognition approach based on a global scattering center model. The scattering center model is established offline using range profiles at multiple viewing angles, so the original data amount is much less than that required for establishing SAR image templates. Scattering center features at different target poses can be conveniently predicted by this model. Moreover, the model can be modified to predict features for various target configurations. For the SAR image to be classified, regional features at different levels are extracted by thresholding and morphological operations. The regional features are matched to the predicted scattering center features of different targets to arrive at a decision. This region-to-point matching is much easier to implement and is less sensitive to nonideal factors such as noise and pose estimation error than point-to-point matching. A matching scheme proceeding from coarse to fine regional features in the inner cycle and through different pose hypotheses in the outer cycle is designed to improve the efficiency and robustness of the classifier. Experiments using both data predicted by a high-frequency electromagnetic (EM) code and data measured in the MSTAR program verify the validity of the method.
Index Terms: Ground target, model based, scattering center, synthetic aperture radar (SAR) automatic target recognition (ATR).
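The core of region-to-point matching can be illustrated as scoring how many predicted scattering centers fall inside the regions extracted from the SAR image, which tolerates small position errors better than pairing individual points. This is a minimal sketch under assumed inputs (a binary region mask and integer pixel coordinates for predicted centers); it is not the paper's actual matching scheme:

```python
import numpy as np

def region_to_point_score(region_mask, points):
    """Fraction of predicted scattering centers that land inside extracted regions.

    region_mask: 2D boolean array produced by thresholding and morphology.
    points:      (N, 2) integer array of predicted (row, col) scattering centers.
    """
    rows, cols = points[:, 0], points[:, 1]
    # Discard predicted centers that fall outside the image bounds
    inside = ((rows >= 0) & (rows < region_mask.shape[0])
              & (cols >= 0) & (cols < region_mask.shape[1]))
    hits = region_mask[rows[inside], cols[inside]]
    return hits.sum() / len(points)
```

A classifier along these lines would evaluate this score for each target hypothesis (and each pose hypothesis) and pick the one whose predicted scattering centers best agree with the observed regions.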
Modeling realistic garments is essential for online shopping and many other applications, including virtual characters. Most existing methods require either a multi-camera capture setup or a restricted mannequin pose. We address the garment modeling problem from a single input image. We design an all-pose garment outline interpretation and a shading-based detail modeling algorithm. Our method first estimates the mannequin pose and body shape from the input image. It then interprets the garment outline using oriented facets determined by the mannequin pose to generate an initial 3D garment model. Shape details such as folds and wrinkles are modeled by shape-from-shading techniques to improve the realism of the garment model. Our method achieves result quality similar to prior methods from just a single image, significantly improving the flexibility of garment modeling.