Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder uses only about 7% of the parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet
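The folding operation described above can be sketched numerically: each point of a fixed 2D grid is concatenated with the global codeword and mapped to 3D by a small MLP shared across grid points. The sketch below uses random, untrained weights and illustrative sizes (a 45×45 grid and a 512-dimensional codeword); it shows the shape mechanics only, not the paper's trained network:

```python
import numpy as np

def folding_decoder(codeword, grid_size=45, hidden=64, seed=0):
    """Sketch of a folding-based decoder: every 2D grid point is
    concatenated with the codeword and pushed through a shared
    two-layer MLP (random, untrained weights) to produce a 3D point."""
    rng = np.random.default_rng(seed)
    # Canonical 2D grid of grid_size x grid_size points in [-1, 1]^2.
    u = np.linspace(-1.0, 1.0, grid_size)
    grid = np.stack(np.meshgrid(u, u), axis=-1).reshape(-1, 2)  # (M, 2)
    M = grid.shape[0]
    # Replicate the codeword and concatenate it with each grid point.
    x = np.hstack([np.tile(codeword, (M, 1)), grid])            # (M, d+2)
    # Shared per-point MLP: (d+2) -> hidden -> 3.
    W1 = rng.standard_normal((x.shape[1], hidden)) * 0.1
    W2 = rng.standard_normal((hidden, 3)) * 0.1
    return np.maximum(x @ W1, 0.0) @ W2                         # (M, 3)

points = folding_decoder(np.zeros(512))
print(points.shape)  # (2025, 3): one reconstructed 3D point per grid point
```

In the actual architecture the folding is applied twice and the weights are learned end-to-end; the point here is only that the decoder's parameter count is independent of the number of output points, which is where the roughly 7% parameter saving comes from.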
Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood, which contains fine-grained structural information that turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performance on major datasets. Our code is available at http://www.merl.com/research/license#KCNet
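The kernel-correlation idea can be illustrated directly: a set of 3D kernel points scores a local neighborhood by its summed Gaussian affinities, responding strongly when the kernel's shape matches the neighborhood's geometry. The function below is a minimal NumPy sketch; the name, the bandwidth `sigma`, and the normalization are illustrative assumptions, not the paper's implementation (where the kernel points are learned by backpropagation):

```python
import numpy as np

def kernel_correlation(kernel_pts, neighbor_pts, sigma=0.1):
    """Gaussian kernel correlation between a point-set kernel (K, 3)
    and one point's local neighborhood (L, 3): average summed affinity
    over all kernel-neighbor pairs."""
    # Pairwise squared distances between kernel points and neighbors.
    d2 = ((kernel_pts[:, None, :] - neighbor_pts[None, :, :]) ** 2).sum(-1)
    # High response when kernel points sit near neighbor points.
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum() / neighbor_pts.shape[0]

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
matched = kernel_correlation(pts, pts)          # kernel matches the neighborhood
shifted = kernel_correlation(pts + 5.0, pts)    # kernel far from the neighborhood
print(matched > shifted)  # True
```

During training, gradients with respect to `kernel_pts` would move the kernel points toward recurring local geometric patterns, in analogy to how image convolution kernels learn edge and texture detectors.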
Anti-cancer peptides (ACPs) are a series of short peptides composed of 10–60 amino acids that can inhibit tumour cell proliferation or migration, or suppress the formation of tumour blood vessels, and are less likely to cause drug resistance. These merits make ACPs among the most promising anti-cancer candidates. However, ACPs may be degraded by proteases, or exhibit cytotoxicity in many cases. To overcome these drawbacks, a plethora of research has focused on the reconstruction or modification of ACPs to improve their anti-cancer activity while reducing their cytotoxicity. The modification of ACPs mainly includes main-chain reconstruction and side-chain modification. After summarizing the classification and mechanisms of action of ACPs, this paper focuses on recent developments in their reconstruction and modification. The information collected here may provide ideas for further research on ACPs, in particular their modification.
Cloud providers have recently introduced new offerings whereby spare computing resources are accessible at discounts compared to on-demand computing. Exploiting this opportunity is challenging because such resources are accessed at low priority and can therefore elastically leave (through preemption) and join the computation at any time. In this paper, we design a new technique called coded elastic computing, enabling distributed computations over elastic resources. The proposed technique allows machines to leave the computation without sacrificing the algorithm-level performance and, at the same time, adaptively reduces the workload at existing machines when new ones join the computation. Leveraging coded redundancy, our approach achieves a computational cost similar to that of the original (uncoded) method when all machines are present; the cost gracefully increases when machines are preempted and decreases when machines join. The performance of the proposed technique is evaluated on matrix-vector multiplication and linear regression tasks. In experimental validation, it achieves exactly the same numerical results as the noiseless computation while reducing the computation time by 46% compared to non-adaptive coding schemes.
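The role of coded redundancy here can be illustrated on matrix-vector multiplication. The sketch below uses a simple Vandermonde-based MDS-style encoding: the matrix is split into k row blocks, n > k coded blocks are distributed to n machines, and the full product is recovered from any k surviving machines. This only demonstrates preemption tolerance; the paper's scheme additionally redistributes per-machine workload adaptively as the set of available machines changes, which this sketch does not show:

```python
import numpy as np

def encode_blocks(A, n, k):
    """Split A into k row blocks and form n coded blocks via a
    Vandermonde generator (any k of its rows are invertible)."""
    blocks = np.split(A, k)
    G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return G, coded

def decode(G, results, alive):
    """Recover [B0@x; ...; B(k-1)@x] from any k surviving machines'
    coded partial products."""
    k = G.shape[1]
    idx = alive[:k]
    Gs = G[idx]                                  # invertible k x k submatrix
    Y = np.stack([results[i] for i in idx])      # k coded partial products
    X = np.linalg.solve(Gs, Y)                   # uncoded block products
    return np.concatenate(X)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
x = rng.standard_normal(5)
G, coded = encode_blocks(A, n=4, k=2)
results = {i: coded[i] @ x for i in (0, 2, 3)}   # machine 1 preempted
y = decode(G, results, alive=[0, 2, 3])          # still recovers A @ x exactly
```

Because the encoding is linear and exact, the decoded result matches the uncoded computation bit-for-bit, consistent with the "exactly the same numerical result" claim above.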
In this paper, we present a new water-filling algorithm for power allocation in Orthogonal Frequency Division Multiplexing (OFDM)-based cognitive radio systems. The conventional water-filling algorithm cannot be directly employed for power allocation in a cognitive radio system, because there are more power constraints in the cognitive radio power allocation problem than in the classic OFDM system. In this paper, a novel algorithm based on iterative water-filling is presented to overcome this limitation. However, the computational complexity of iterative water-filling is very high. Thus, we explore features of the water-filling algorithm and propose a low-complexity algorithm using power-increment or power-decrement water-filling processes. Simulation results show that our proposed algorithms can achieve the optimal power allocation performance in less time than the iterative water-filling algorithms.
Keywords: cognitive radio, orthogonal frequency division multiplexing, water-filling algorithm, power allocation.
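For reference, the classical single-constraint water-filling allocation that these algorithms build on can be written in a few lines: channels are filled to a common "water level", so better channels (lower noise-to-gain floor) receive more power and sufficiently poor channels receive none. This is the textbook baseline, not the proposed cognitive-radio variant with additional interference constraints:

```python
import numpy as np

def water_filling(gains, total_power):
    """Classical water-filling: allocate total_power across channels
    whose floor heights are 1/gain, filling to a common water level."""
    inv = 1.0 / np.asarray(gains, dtype=float)   # per-channel floor heights
    inv_sorted = np.sort(inv)
    for m in range(len(inv), 0, -1):
        # Candidate water level if only the m best channels are active.
        level = (total_power + inv_sorted[:m].sum()) / m
        if level > inv_sorted[m - 1]:
            break                                # all m channels are above water
    return np.maximum(level - inv, 0.0)

p = water_filling([2.0, 1.0, 0.1], total_power=1.0)
print(p)  # [0.75 0.25 0.  ]: the worst channel is shut off entirely
```

The cognitive-radio problem adds per-subcarrier interference constraints to protect primary users, which is why a single pass like this no longer suffices and the paper resorts to iterative and then power-increment/decrement variants.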
We consider the problem of computing a binary linear transformation when all circuit components are unreliable. Two models of unreliable components are considered: probabilistic errors and permanent errors. We introduce the "ENCODED" technique, which keeps the error probability of the computation of the linear transformation bounded below a small constant independent of the size of the linear transformation, even when all logic gates in the computation are noisy. By deriving a lower bound, we show that in some cases the computational complexity of the ENCODED technique achieves the optimal scaling in error probability. Further, we examine the gain in energy-efficiency from use of a "voltage-scaling" scheme where gate energy is reduced by lowering the supply voltage. We use a gate energy-reliability model to show that tuning gate energy appropriately at different stages of the computation ("dynamic" voltage scaling), in conjunction with ENCODED, can lead to orders-of-magnitude energy savings over the classical "uncoded" approach. Finally, we also examine the problem of computing a linear transformation when noiseless decoders can be used, providing upper and lower bounds for this problem.
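To make the probabilistic-error setting concrete, the sketch below models every output bit of a binary matrix-vector product flipping independently with probability p, and suppresses the errors with plain repetition and per-bit majority voting. Repetition is a much weaker and costlier form of redundancy than the coding ENCODED embeds into the computation itself, but it illustrates the basic point that coded redundancy can keep the end-to-end error probability bounded even when every gate is noisy:

```python
import numpy as np

def noisy_matvec(A, x, p, rng):
    """Binary (GF(2)) matrix-vector product where every output bit is
    flipped with probability p, modeling unreliable logic gates."""
    y = (A @ x) % 2
    flips = rng.random(y.shape) < p
    return y ^ flips

def protected_matvec(A, x, p, reps, rng):
    """Simplified redundancy: repeat the noisy computation reps times
    and take a per-bit majority vote (a repetition code)."""
    votes = np.stack([noisy_matvec(A, x, p, rng) for _ in range(reps)])
    return (votes.sum(axis=0) > reps // 2).astype(int)

rng = np.random.default_rng(1)
A = rng.integers(0, 2, (16, 16))
x = rng.integers(0, 2, 16)
exact = (A @ x) % 2
approx = protected_matvec(A, x, p=0.05, reps=31, rng=rng)
```

With p = 0.05 and 31 repetitions, each output bit is wrong only if at least 16 of 31 noisy copies flip, an event with vanishingly small probability; ENCODED achieves comparable reliability at far lower computational cost by using stronger codes than repetition.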