This study aims to establish the correlation between charging current under a stepwise increased electric field and space charge formation in low-density polyethylene (LDPE) and LDPE/ZnO nanocomposites. By applying a stepwise increased electric field, the charging current and space charge distribution were measured as functions of time at 10, 30, and 50 kV/mm, respectively. The results show that homopolar space charge formation induces a charging current that increases with time, while heteropolar space charge formation induces a decaying charging current; excellent space charge suppression is accompanied by a charging current that remains constant with time. The abundant deep trapping states introduced by the nanofillers account for both the lower steady-state charging current and the space charge suppression in the nanocomposite. These results indicate that the charging current behavior under a stepwise increased electric field is highly informative and can serve as a direct method for roughly estimating space charge formation and suppression in polymer dielectrics. The addition of ZnO nanofillers suppresses space charge accumulation in LDPE, indicating that the LDPE/ZnO nanocomposite is a potential insulation material for high voltage direct current (HVDC) cables.
Index Terms: stepwise increased electric field, charging current, space charge, nanodielectrics, LDPE/ZnO.
In recent years, image fusion has been a research hotspot. However, balancing the fusion of noise-free and noisy images remains a major challenge. To address the weak performance and low robustness of existing image fusion algorithms on noisy images, an infrared and visible image fusion algorithm based on optimized low-rank matrix factorization with guided filtering is proposed. First, a minimized-error reconstruction term is introduced into the low-rank matrix factorization, which effectively improves the optimization and yields a base image with good filtering properties. Then, using the base image as the guide image, guided filtering decomposes the source image into a high-frequency layer containing detail information and noise, and a low-frequency layer containing energy information. According to the noise intensity, the sparse reconstruction error is adaptively obtained to fuse the high-frequency layers, and a weighted-average strategy is used to fuse the low-frequency layers. Finally, the fused image is obtained by reconstructing the pre-fused high-frequency and low-frequency layers. Comparative experiments show that the proposed algorithm not only performs well on noise-free images but, more importantly, can effectively handle the fusion of noisy images.
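The guided-filtering decomposition described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses the classic guided filter (box-filter formulation) on grayscale float arrays, with the low-rank base image standing in as the guide; the function names, window radius, and regularization value are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter: edge-preserving smoothing of src, guided by guide."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I          # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p         # local covariance guide/src
    a = cov_Ip / (var_I + eps)                 # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def decompose(src, base):
    """Split src into a smooth low-frequency layer and a high-frequency residual
    (detail + noise), using the base image from the low-rank step as the guide."""
    low = guided_filter(base, src)
    high = src - low
    return low, high

# demo: the two layers reconstruct the source exactly, by construction
rng = np.random.default_rng(0)
src = rng.random((32, 32))
base = guided_filter(src, src)                 # stand-in for the low-rank base image
low, high = decompose(src, base)
```

Because the high-frequency layer is defined as the residual `src - low`, summing the two layers always recovers the source exactly, which is why the fusion step can operate on each layer independently.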
Infrared and visible image fusion combines information from different sensors to achieve a richer description of the same scene. To highlight the salient features of the infrared and visible images in the fused image and obtain a fused image with good performance, an end-to-end infrared and visible image fusion algorithm is proposed in this paper. A contrast attention module and a visible-image cascade branch are introduced in the generator, so that the fused image can focus on the detail information in the visible image and the contrast information in the infrared image. To retain more structural contour information from the source images, a contour loss is added to the content loss function. In addition, the contrast and detail information in the infrared and visible images are balanced by two discriminators, and a target-guided reward function is introduced into the discriminator, which further encourages the generator to produce effective fused images. Finally, extensive fusion experiments on public datasets verify the advantages of the proposed algorithm over other classical algorithms, and ablation experiments demonstrate the effectiveness of each improved component.
INDEX TERMS end-to-end infrared and visible image fusion; contrast attention module; contour loss; target-guided reward function
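The abstract does not give the exact form of the contour loss, but a common gradient-based formulation penalizes the distance between the fused image's gradients and the element-wise strongest source gradients. The sketch below is a hypothetical NumPy version of such a term; the forward-difference gradient, the element-wise maximum target, and the L1 averaging are all assumptions, not the paper's definition.

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude via forward differences (zero at the far borders)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return np.hypot(gx, gy)

def contour_loss(fused, ir, vis):
    """Mean L1 distance between the fused gradients and the element-wise
    strongest gradients of the two source images (hypothetical form)."""
    target = np.maximum(grad_mag(ir), grad_mag(vis))
    return np.mean(np.abs(grad_mag(fused) - target))
```

With this form, the loss is zero exactly when the fused image reproduces the stronger of the two source gradients at every pixel, which is the intuition behind preserving structural contours from both modalities.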
To improve the fusion performance of infrared and visible images and effectively retain the edge structure information of the image, a fusion algorithm based on iteratively controlled anisotropic diffusion and regional gradient structure is proposed. First, an iteration control operator is introduced into the anisotropic diffusion model to effectively control the number of iterations. Then, the image is decomposed into a structure layer containing detail information and a base layer containing the residual energy information. Different fusion schemes are applied according to the characteristics of each layer: the structure layers are fused by combining a regional structure operator with the structure tensor matrix, and the base layers are fused through a visual saliency map. Finally, the fused image is obtained by reconstructing the fused structure and base layers. Experimental results show that the proposed algorithm not only handles the fusion of infrared and visible images effectively but is also computationally efficient.
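The anisotropic-diffusion decomposition can be sketched with the standard Perona–Malik scheme, where a fixed iteration count plays the role of the iteration control described above. This is a minimal illustration under stated assumptions: periodic borders (via `np.roll`) for brevity, an exponential edge-stopping function, and illustrative values for the step size and contrast parameter `kappa`.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=0.1, step=0.2):
    """Perona-Malik diffusion; n_iter acts as the iteration control.
    step * 4 neighbours < 1 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (periodic borders for brevity)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping function g(d) = exp(-(d/kappa)^2) preserves strong edges
        u += step * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

def split_layers(img, n_iter=10):
    """Base layer = diffused image; structure layer = residual detail."""
    base = anisotropic_diffusion(img, n_iter)
    return base, img - base
```

Since the structure layer is the residual `img - base`, the two layers sum back to the source exactly, and increasing `n_iter` pushes more content from the base layer into the structure layer.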
The temporal convolutional network (TCN) model offers strong parallelism and stable gradients in time-series processing. The structural depth of the model is related to the input length, the convolution kernel size, and the dilation factor. To further improve prediction accuracy, this paper proposes a software aging prediction framework based on a TCN model optimized by grey relational analysis. Available-memory data are collected as the input of the framework, the number of input nodes of the TCN model is determined through grey relational analysis, and the model is then trained to make predictions; its performance is evaluated by the average error between the predicted and actual memory. Finally, the length of the input chunk is varied in a comparative experiment, which verifies the effectiveness of the grey relational analysis.
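The relation between input length, kernel size, dilation, and structural depth mentioned above can be made concrete. Assuming the common TCN design (dilations doubling per level, two causal convolutions per residual block) — the paper's exact block structure is not given, so `convs_per_level` is an assumption — the receptive field and the depth needed to cover a given input length are:

```python
def receptive_field(kernel_size, n_levels, convs_per_level=2):
    """Receptive field of a TCN with dilations 1, 2, 4, ..., 2**(n_levels - 1):
    1 + convs_per_level * (k - 1) * (2**n_levels - 1)."""
    return 1 + convs_per_level * (kernel_size - 1) * (2 ** n_levels - 1)

def levels_needed(input_len, kernel_size, convs_per_level=2):
    """Smallest number of dilated levels whose receptive field covers input_len."""
    n = 1
    while receptive_field(kernel_size, n, convs_per_level) < input_len:
        n += 1
    return n
```

For example, with kernel size 3 a single level sees 5 time steps, and covering an input window of 100 steps requires 5 dilated levels; this is why the chosen input-node length directly constrains the model's depth.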