Compression garments are elastic garments engineered with a pressure gradient; they are worn on the limbs, the upper or lower body, or the full body for therapeutic and sports applications. This article reviews compression garments, concentrating on how to design garments that deliver an appropriate pressure for specific applications. It covers the types of compression garments, fibers and yarns, knitted fabric construction, garment design, evaluation systems, and pressure measurement and modeling. Together, the material properties, fabric properties, pressure models, and the garment design system support the prediction, design, and fabrication of compression garments. Lastly, the current research status and future directions are discussed.
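A common first-order model behind the pressure prediction mentioned above is Laplace's law, which relates fabric tension to interface pressure on a curved body part. The sketch below is illustrative only: the function name, the cylindrical-limb assumption, and the example tension value are not from the article.

```python
import math

MMHG_PER_PA = 1.0 / 133.322  # pascals to millimetres of mercury

def interface_pressure_mmhg(tension_n_per_m: float, circumference_m: float) -> float:
    """Estimate interface pressure via Laplace's law, P = T / r,
    modeling the limb as a cylinder of radius r = c / (2*pi).

    tension_n_per_m: fabric tension per unit fabric width (N/m),
    typically measured from the fabric's stress-strain curve at wear strain.
    """
    radius = circumference_m / (2.0 * math.pi)
    pressure_pa = tension_n_per_m / radius
    return pressure_pa * MMHG_PER_PA

# Example: 130 N/m of fabric tension on a 30 cm limb circumference
p = interface_pressure_mmhg(130.0, 0.30)  # ~20 mmHg
```

Because pressure scales inversely with radius, the same fabric tension produces higher pressure at the slimmer ankle than at the calf, which is how graduated compression garments achieve their pressure gradient.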
Metamodels are widely used to facilitate the analysis and optimization of engineering systems that involve computationally expensive simulations. Kriging is a metamodelling technique that is well known for its ability to build surrogate models of responses with non-linear behaviour. However, the assumption of a stationary covariance structure underlying Kriging does not hold in situations where the level of smoothness of a response varies significantly. Although non-stationary Gaussian process models have been studied for years in the statistics and geostatistics communities, this has largely been for physical experimental data in relatively low dimensions. In this paper, the non-stationary covariance structure is incorporated into Kriging modelling for computer simulations. To represent the non-stationary covariance structure, we adopt a non-linear mapping approach based on parameterized density functions. To avoid over-parameterization in the high-dimensional problems typical of engineering design, we propose a modified version of the non-linear mapping approach with a sparser, yet flexible, parameterization. The effectiveness of the proposed method is demonstrated through both mathematical and engineering examples. The robustness of the method is verified by testing multiple functions under various sampling settings. We also demonstrate that our method is effective in quantifying the prediction uncertainty associated with the use of metamodels.
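The non-linear mapping idea can be sketched as follows: warp the inputs through a parameterized monotone map, then apply an ordinary stationary kernel in the warped space, which yields a non-stationary covariance in the original space. This is a minimal one-dimensional sketch; the choice of a Kumaraswamy CDF as the warp, the hyper-parameter values, and the function names are illustrative assumptions, not the paper's exact parameterization (which is based on parameterized density functions).

```python
import numpy as np

def warp(x, a, b):
    """Kumaraswamy CDF as a density-based non-linear map of [0, 1];
    a and b control where input space is stretched or compressed."""
    return 1.0 - (1.0 - np.clip(x, 0.0, 1.0) ** a) ** b

def kernel(x1, x2, a, b, length_scale):
    """Stationary Gaussian kernel applied in the warped space,
    which induces a non-stationary covariance in the original space."""
    z1, z2 = warp(x1, a, b), warp(x2, a, b)
    d = z1[:, None] - z2[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def krige_predict(x_train, y_train, x_test, a=2.0, b=0.5, ls=0.1, nugget=1e-8):
    """Zero-mean Kriging (simple Kriging) predictor with the warped kernel."""
    K = kernel(x_train, x_train, a, b, ls) + nugget * np.eye(len(x_train))
    k_star = kernel(x_test, x_train, a, b, ls)
    return k_star @ np.linalg.solve(K, y_train)
```

In practice the warp parameters and length scale would be estimated by maximum likelihood alongside the other Kriging hyper-parameters; with a near-zero nugget the predictor interpolates the training data.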
Perceptual learning is often orientation and location specific, which may indicate neuronal plasticity in early visual areas. However, learning specificity diminishes with additional exposure of the transfer orientation or location via irrelevant tasks, suggesting that the specificity is related to untrained conditions, likely because neurons representing untrained conditions are neither bottom-up stimulated nor top-down attended during training. To demonstrate these top-down and bottom-up contributions, we applied a "continuous flash suppression" technique to suppress the exposure stimulus into sub-consciousness, with additional manipulations to achieve pure bottom-up stimulation or top-down attention with the transfer condition. We found that either bottom-up or top-down influences enabled significant transfer of orientation and Vernier discrimination learning. These results suggest that learning specificity may result from under-activation of untrained visual neurons due to insufficient bottom-up stimulation and/or top-down attention during training, so that high-level perceptual learning may not functionally connect to these neurons for learning transfer.
DOI: http://dx.doi.org/10.7554/eLife.14614.001
To produce images that are suitable for display, tone-mapping is widely used in digital cameras to map linear color measurements into narrow gamuts with limited dynamic range. This introduces non-linear distortion that must be undone, through a radiometric calibration process, before computer vision systems can analyze such photographs radiometrically. This paper considers the inherent uncertainty of undoing the effects of tone-mapping. We observe that this uncertainty varies substantially across color space, making some pixels more reliable than others. We introduce a model for this uncertainty and a method for fitting it to a given camera or imaging pipeline. Once fit, the model provides for each pixel in a tone-mapped digital photograph a probability distribution over linear scene colors that could have induced it. We demonstrate how these distributions can be useful for visual inference by incorporating them into estimation algorithms for a representative set of vision tasks.
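The core observation, that inverting a tone-map leaves per-pixel uncertainty that varies across color space, can be illustrated with a toy pipeline: a global gamma curve followed by 8-bit quantization. Each output code then corresponds to an interval of linear scene values, and the interval width differs by code. This is a deliberately simplified sketch; real camera pipelines are more complex, and the paper fits a probabilistic model to a given camera rather than assuming a known gamma.

```python
import numpy as np

GAMMA = 2.2  # assumed global tone curve for illustration: v = x**(1/GAMMA)

def tonemap(x):
    """Forward toy tone-map: linear value in [0, 1] -> 8-bit code."""
    return np.round(255.0 * np.clip(x, 0.0, 1.0) ** (1.0 / GAMMA)).astype(int)

def inverse_interval(code):
    """Interval of linear values consistent with an 8-bit code:
    invert the tone curve at the quantization-bin edges."""
    lo = np.clip((code - 0.5) / 255.0, 0.0, 1.0) ** GAMMA
    hi = np.clip((code + 0.5) / 255.0, 0.0, 1.0) ** GAMMA
    return lo, hi
```

Because the gamma curve compresses highlights, the interval (and thus the uncertainty about the original linear color) is much wider for bright codes than for dark ones, which is the sense in which some pixels are more reliable than others.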
We develop a framework for extracting a concise representation of the shape information available from diffuse shading in a small image patch. This produces a mid-level scene descriptor, composed of local shape distributions that are inferred separately at every image patch across multiple scales. The framework is based on a quadratic representation of local shape that, in the absence of noise, has guarantees on recovering accurate local shape and lighting. When noise is present, the inferred local shape distributions provide useful shape information without over-committing to any particular image explanation. These local shape distributions naturally encode the fact that some smooth diffuse regions are more informative than others, and they enable efficient and robust reconstruction of object-scale shape. Experimental results show that this approach to surface reconstruction compares well against the state of the art on both synthetic images and captured photographs.
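The quadratic representation of local shape can be made concrete with a forward model: a depth patch z = ax² + by² + cxy + dx + ey + f has analytic surface gradients, from which Lambertian diffuse intensities follow directly. The sketch below assumes orthographic projection and a single directional light; function names and the test geometry are illustrative, not from the paper.

```python
import numpy as np

def quadratic_patch(coef, xs, ys):
    """Depth of a local quadratic patch z = a x^2 + b y^2 + c x y + d x + e y + f."""
    a, b, c, d, e, f = coef
    X, Y = np.meshgrid(xs, ys)
    return a * X**2 + b * Y**2 + c * X * Y + d * X + e * Y + f

def lambertian_shading(coef, xs, ys, light):
    """Diffuse intensity max(0, n.l) using the patch's analytic gradients."""
    a, b, c, d, e, f = coef
    X, Y = np.meshgrid(xs, ys)
    zx = 2.0 * a * X + c * Y + d          # dz/dx
    zy = 2.0 * b * Y + c * X + e          # dz/dy
    n = np.stack([-zx, -zy, np.ones_like(zx)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    l = np.asarray(light, float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, None)      # Lambertian, no negative intensities
```

Inference runs this model in reverse: given observed shading in a patch, one asks which quadratic coefficients (and lights) could explain it, and under noise the answer is naturally a distribution over local shapes rather than a single estimate.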