Neural representations have emerged as a new paradigm for applications in rendering, imaging, geometric modeling, and simulation. Compared to traditional representations such as meshes, point clouds, or volumes, they can be flexibly incorporated into differentiable learning-based pipelines. While recent improvements to neural representations now make it possible to represent signals with fine detail at moderate resolutions (e.g., for images and 3D shapes), adequately representing large-scale or complex scenes has proven a challenge. Current neural representations fail to accurately represent images at resolutions greater than a megapixel, or 3D scenes with more than a few hundred thousand polygons. Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest. Our approach uses a multiscale block-coordinate decomposition, similar to a quadtree or octree, that is optimized during training. The network architecture operates in two stages: using the bulk of the network parameters, a coordinate encoder generates a feature grid in a single forward pass. Then, hundreds or thousands of samples within each block can be efficiently evaluated using a lightweight feature decoder. With this hybrid implicit-explicit network architecture, we demonstrate the first experiments that fit gigapixel images to nearly 40 dB peak signal-to-noise ratio. Notably, this represents an increase in scale of over 1000× compared to the resolution of previously demonstrated image-fitting experiments. Moreover, our approach represents 3D shapes significantly faster and better than previous techniques; it reduces training times from days to hours or minutes and memory requirements by over an order of magnitude.
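The two-stage evaluation described above (one expensive encoder pass per block, then many cheap decoder queries) can be illustrated with a minimal numpy sketch. All network sizes, function names, and the use of bilinear interpolation here are illustrative assumptions for a 2D case, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper).
FEAT = 8   # channels in the per-block feature grid
GRID = 4   # feature grid resolution per block (GRID x GRID)

# "Coordinate encoder": holds the bulk of the parameters; maps a block's
# global coordinate (center x, y, and scale) to a small feature grid in
# ONE forward pass per block.
W1 = rng.normal(size=(3, 64)); b1 = np.zeros(64)
W2 = rng.normal(size=(64, GRID * GRID * FEAT)); b2 = np.zeros(GRID * GRID * FEAT)

def encode_block(block_coord):
    """block_coord: (3,) array -> (GRID, GRID, FEAT) feature grid."""
    h = np.tanh(block_coord @ W1 + b1)
    return (h @ W2 + b2).reshape(GRID, GRID, FEAT)

# "Feature decoder": a lightweight MLP shared by all sample points; it maps an
# interpolated feature vector to the signal value (e.g., an RGB pixel).
V1 = rng.normal(size=(FEAT, 16)); c1 = np.zeros(16)
V2 = rng.normal(size=(16, 3)); c2 = np.zeros(3)

def decode(features):
    return np.tanh(features @ V1 + c1) @ V2 + c2

def bilinear(grid, xy):
    """Bilinearly interpolate the feature grid at local coords xy in [0, 1]^2."""
    x, y = xy[:, 0] * (GRID - 1), xy[:, 1] * (GRID - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, GRID - 1), np.minimum(y0 + 1, GRID - 1)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    return ((1 - fx) * (1 - fy) * grid[x0, y0] + fx * (1 - fy) * grid[x1, y0]
            + (1 - fx) * fy * grid[x0, y1] + fx * fy * grid[x1, y1])

# One encoder pass amortized over many decoder evaluations within the block:
grid = encode_block(np.array([0.25, 0.75, 0.5]))  # block center + scale
samples = rng.uniform(size=(1000, 2))             # 1000 query points in the block
values = decode(bilinear(grid, samples))          # (1000, 3) predicted signal
print(values.shape)  # (1000, 3)
```

The key efficiency property this sketch captures is that the large encoder runs once per block, while each of the thousands of per-sample queries touches only the interpolation and the small decoder.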
The study begins with a general characterization of the stages that make up the mathematical modeling process, places special emphasis on the stage our approach targets (obtaining and validating models), and then addresses the structure of the argument proposed by Toulmin (2006) as well as Nicolas Balacheff's (1987, 1988) theory of types and levels of proof in mathematics. The latter, when inserted into the context of using the TAP (Toulmin's Argument Pattern) in a modeling-based teaching setting, constitutes a theoretical and methodological approach that makes it possible to analyze the nature of the justifications and backings employed in the argumentative process, as well as to classify the data and evaluate the conclusion. This tool also enables a diagnostic assessment of a student's level of knowledge regarding the use of proof and demonstration in Physics teaching. It is, therefore, an instrument that combines all the elements of the TAP with Nicolas Balacheff's (1987, 1988) theory of types and levels of proof.