Figure 1: Pipeline of our high-resolution shape completion method. Given a 3D shape with large missing regions, our method outputs a complete shape through global structure inference and local geometry refinement. Our architecture consists of two jointly trained sub-networks: one network predicts the global structure of the shape while the other locally generates the repaired surface under the guidance of the first network.
Abstract. We propose a data-driven method for recovering missing parts of 3D shapes. Our method is based on a new deep learning architecture consisting of two sub-networks: a global structure inference network and a local geometry refinement network. The global structure inference network incorporates a long short-term memorized context fusion module (LSTM-CF) that infers the global structure of the shape based on multi-view depth information provided as part of the input. It also includes a 3D fully convolutional (3DFCN) module that further enriches the global structure representation according to volumetric information in the input. Under the guidance of the global structure network, the local geometry refinement network takes as input local 3D patches around missing regions, and progressively produces a high-resolution, complete surface through a volumetric encoder-decoder architecture. Our method jointly trains the global structure inference and local geometry refinement networks in an end-to-end manner. We perform qualitative and quantitative evaluations on six object categories, demonstrating that our method outperforms existing state-of-the-art work on shape completion.
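The two-stream pipeline above boils down to preparing two inputs from one occupancy volume: a coarse global grid for the structure network and high-resolution local patches around missing regions for the refinement network. A minimal numpy sketch of that data preparation (the 256³ and 32³ resolutions follow the figure; the function names and pooling choice are assumptions, not the paper's code):

```python
import numpy as np

def downsample_volume(vol, factor):
    """Max-pool a cubic binary occupancy grid by an integer factor (e.g. 256 -> 32)."""
    d = vol.shape[0] // factor
    return vol.reshape(d, factor, d, factor, d, factor).max(axis=(1, 3, 5))

def extract_patch(vol, center, size=32):
    """Crop a size^3 local patch around a voxel, clamped to the volume bounds."""
    lo = [min(max(c - size // 2, 0), vol.shape[i] - size) for i, c in enumerate(center)]
    return vol[lo[0]:lo[0]+size, lo[1]:lo[1]+size, lo[2]:lo[2]+size]

# Toy 256^3 occupancy grid with one solid block standing in for a partial shape.
vol = np.zeros((256, 256, 256), dtype=np.uint8)
vol[100:140, 100:140, 100:140] = 1

global_grid = downsample_volume(vol, 8)       # 32^3 input to the structure network
patch = extract_patch(vol, (120, 120, 120))   # 32^3 input to the refinement network
print(global_grid.shape, patch.shape)          # (32, 32, 32) (32, 32, 32)
```

In the actual method the 32³ global prediction is fed back as guidance when each local patch is refined; this sketch only shows the resolutions involved.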
Our method generates multi-view depth maps and silhouettes, and uses a rendering function to obtain the 3D shapes. Right: We can also extend our framework to reconstruct 3D shapes from single/multi-view depth maps or silhouettes.
β-Glucan particles (GPs) are purified Saccharomyces cerevisiae cell walls treated so that they are primarily β1,3-d-glucans and free of mannans and proteins. GPs are phagocytosed by dendritic cells (DCs) via the Dectin-1 receptor, and this interaction stimulates proinflammatory cytokine secretion by DCs. As the hollow, porous GP structure allows for high antigen loading, we hypothesized that antigen-loaded GPs could be exploited as a receptor-targeted vaccine delivery system. Ovalbumin (OVA) was electrostatically complexed inside the hollow GP shells (GP-OVA). Incubation of C57BL/6J mouse bone marrow-derived DCs with GP-OVA resulted in phagocytosis, upregulation of maturation markers, and rapid proteolysis of OVA. Compared with free OVA, GP-OVA was >100-fold more potent at stimulating the proliferation of OVA-reactive transgenic CD8+ OT-I and CD4+ OT-II T cells, as measured by in vitro [3H]thymidine incorporation using DCs as antigen-presenting cells. Next, immune responses in C57BL/6J mice following subcutaneous immunizations with GP-OVA were compared with those in C57BL/6J mice following subcutaneous immunizations with OVA absorbed onto the adjuvant alum (Alum/OVA). Vaccination with GP-OVA stimulated substantially higher antigen-specific CD4+ T-cell lymphoproliferative and enzyme-linked immunospot (ELISPOT) responses than that with Alum/OVA. Moreover, the T-cell responses induced by GP-OVA were Th1 biased (determined by gamma interferon [IFN-γ] ELISPOT assay) and Th17 biased (determined by interleukin-17a [IL-17a] ELISPOT assay). Finally, both the GP-OVA and Alum/OVA formulations induced strong secretions of IgG1 subclass anti-OVA antibodies, although only GP-OVA induced secretion of Th1-associated IgG2c antibodies. Thus, the GP-based vaccine platform combines adjuvanticity and antigen delivery to induce strong humoral and Th1- and Th17-biased CD4+ T-cell responses.
Fig. 1. We present a view-based convolutional network that produces local, point-based shape descriptors. The network is trained such that geometrically and semantically similar points across different 3D shapes are embedded close to each other in descriptor space (left). Our produced descriptors are quite generic; they can be used in a variety of shape analysis applications, including dense matching, prediction of human affordance regions, partial scan-to-shape matching, and shape segmentation (right).
We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching. The descriptor is produced by a convolutional network that is trained to embed geometrically and semantically similar points close to one another in descriptor space. The network processes surface neighborhoods around points on a shape that are captured at multiple scales by a succession of progressively zoomed-out views, taken from carefully selected camera positions. We leverage two extremely large sources of data to train our network. First, since our network processes rendered views in the form of 2D images, we repurpose architectures pre-trained on massive image datasets. Second, we automatically generate a synthetic dense point correspondence dataset by non-rigid alignment of corresponding shape parts in a large collection of segmented 3D models. As a result of these design choices, our network effectively encodes multi-scale local context and fine-grained surface detail. Our network can be trained to produce either category-specific descriptors or more generic descriptors by learning from multiple shape categories.
Once trained, at test time, the network extracts local descriptors for shapes without requiring any part segmentation as input. Our method can produce effective local descriptors even for shapes whose category is unknown or different from the ones used during training. We demonstrate through several experiments that our learned local descriptors are more discriminative compared to state-of-the-art alternatives, and are effective in a variety of shape analysis applications.
© 2017 Association for Computing Machinery. This is the author's version of the work, posted for personal use and not for redistribution. The definitive Version of Record is published in ACM Transactions on Graphics.
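Once every surface point carries a descriptor, dense matching between two shapes reduces to nearest-neighbor search in descriptor space. A small numpy sketch of that matching step (the 128-D descriptor dimension and the brute-force search are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def match_points(desc_a, desc_b):
    """For each point on shape A, return the index of the closest point on
    shape B in descriptor space (brute-force squared-L2 nearest neighbor)."""
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2.
    d2 = (np.sum(desc_a ** 2, axis=1)[:, None]
          - 2.0 * desc_a @ desc_b.T
          + np.sum(desc_b ** 2, axis=1)[None, :])
    return np.argmin(d2, axis=1)

# Sanity check: a noisy, permuted copy of a descriptor set should match back
# to the original ordering.
rng = np.random.default_rng(0)
desc_b = rng.normal(size=(100, 128))                          # 100 points, 128-D
perm = rng.permutation(100)
desc_a = desc_b[perm] + 0.01 * rng.normal(size=(100, 128))    # noisy permuted copy
print((match_points(desc_a, desc_b) == perm).mean())
```

For real shapes one would typically replace the brute-force search with an approximate nearest-neighbor index, since point counts run into the tens of thousands.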
Development of a vaccine to protect against cryptococcosis is a priority given the enormous global burden of disease in at-risk individuals. Using glucan particles (GPs) as a delivery system, we previously demonstrated that mice vaccinated with crude Cryptococcus-derived alkaline extracts were protected against lethal challenge with Cryptococcus neoformans and Cryptococcus gattii. The goal of the present study was to identify protective protein antigens that could be used in a subunit vaccine. Using biased and unbiased approaches, six candidate antigens (Cda1, Cda2, Cda3, Fpd1, MP88, and Sod1) were selected, recombinantly expressed in Escherichia coli, purified, and loaded into GPs. Three mouse strains (C57BL/6, BALB/c, and DR4) were then vaccinated with the antigen-laden GPs, following which they received a pulmonary challenge with virulent C. neoformans and C. gattii strains. Four candidate vaccines (GP-Cda1, GP-Cda2, GP-Cda3, and GP-Sod1) afforded a significant survival advantage in at least one mouse model; some vaccine combinations provided added protection over that seen with either antigen alone. Vaccine-mediated protection against C. neoformans did not necessarily predict protection against C. gattii. Vaccinated mice developed pulmonary inflammatory responses that effectively contained the infection; many surviving mice developed sterilizing immunity. Predicted T helper cell epitopes differed between mouse strains and in the degree to which they matched epitopes predicted in humans. Thus, we have discovered cryptococcal proteins that make promising candidate vaccine antigens. Protection varied depending on the mouse strain and cryptococcal species, suggesting that a successful human subunit vaccine will need to contain multiple antigens, including ones that are species specific.
Most previous image matting methods require a roughly specified trimap as input, and estimate fractional alpha values for all pixels that are in the unknown region of the trimap. In this paper, we argue that directly estimating the alpha matte from a coarse trimap is a major limitation of previous methods, as this practice tries to address two difficult and inherently different problems at the same time: identifying true blending pixels inside the trimap region, and estimating accurate alpha values for them. We propose AdaMatting, a new end-to-end matting framework that disentangles this problem into two sub-tasks: trimap adaptation and alpha estimation. Trimap adaptation is a pixelwise classification problem that infers the global structure of the input image by identifying definite foreground, background, and semi-transparent image regions. Alpha estimation is a regression problem that calculates the opacity value of each blended pixel. Our method separately handles these two sub-tasks within a single deep convolutional neural network (CNN). Extensive experiments show that AdaMatting has additional structure awareness and trimap fault-tolerance. Our method achieves state-of-the-art performance on the Adobe Composition-1k dataset both qualitatively and quantitatively. It is also the current best-performing method on the alphamatting.com online evaluation for all commonly used metrics.
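The disentanglement can be summarized as: the adapted trimap decides *where* blending occurs, and the regression head only supplies alpha values inside that region; definite foreground and background become hard 1 and 0. A toy numpy sketch of this fusion step (array names and the label encoding are illustrative assumptions, not the paper's code):

```python
import numpy as np

def fuse_alpha(trimap_class, alpha_raw):
    """Combine trimap-adaptation and alpha-estimation outputs into a final matte.

    trimap_class: per-pixel argmax label (0 = background, 1 = unknown, 2 = foreground)
    alpha_raw:    per-pixel regressed alpha in [0, 1]
    """
    alpha = np.where(trimap_class == 2, 1.0, 0.0)          # definite regions: hard 0/1
    return np.where(trimap_class == 1, alpha_raw, alpha)   # regression only where blended

# One row of three pixels: background, semi-transparent, foreground.
trimap = np.array([[0, 1, 2]])
raw = np.array([[0.9, 0.4, 0.1]])
print(fuse_alpha(trimap, raw))  # [[0.  0.4 1. ]]
```

Note how the regressed values 0.9 and 0.1 in the definite regions are overridden by the classification result, which is the source of the trimap fault-tolerance the abstract mentions.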